On the 11th of August we held the 2007 edition of the CSI programming competition. I chose Harry Potter as the theme. The problems and the reference solutions are available online, and we also have pictures. As you can guess from the balloons, it was an ACM ICPC style contest. We (at least Joe and I) plan to send a team to the regionals to represent University College Dublin (and Ireland).

In this post I'll say a few things about the problems. *I recommend you spend some time thinking about them before reading on.*

Problem A was taken straight from the Harry Potter book. The trick is to *not* start coding until you think it through. Most contestants transformed the rules directly into code. Experienced programmers know that this is a sin: rules like these must be encoded as data, otherwise it is a nightmare to get them right, or to adapt when the rules change.
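To make the point concrete, here is a hypothetical sketch of "rules as data" in C. The contest's actual rules are not reproduced here, so the spell table below is entirely made up; the point is only that each rule becomes one table row, so adding or changing a rule means editing data rather than control flow.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical rule set, for illustration only.  Each rule is a row:
 * the condition it matches and the result it produces. */
struct rule {
    const char *input;   /* condition the rule matches */
    const char *output;  /* result the rule produces   */
};

static const struct rule rules[] = {
    { "lumos",     "light"    },
    { "nox",       "darkness" },
    { "alohomora", "unlock"   },
};

/* Scan the table; one loop replaces a chain of if/else branches. */
static const char *apply_rules(const char *spell)
{
    size_t i;
    for (i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strcmp(rules[i].input, spell) == 0)
            return rules[i].output;
    return "nothing";
}
```

Compare this with a hand-written `if`/`else` chain: a new rule is one new line of data, and the dispatch logic is written (and debugged) exactly once.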

Problem B tests coding skill. Here Java has a huge disadvantage because its standard IO routines are extremely slow compared to those of C. Complaining about this is pointless. First, fast IO can be done in Java at the price of writing C-style code. Second, no one stops you from using C. That aside, my C solution runs in a little over 0.3 seconds on the test data (on my computer), so the 60-second limit should be enough even for Java, provided that the implementation has the correct complexity. What is the correct complexity? When you check whether the word at index 0 starts an *evil almost-anagram*, you look at words 0, ..., n, where n is the smallest index such that these words together have at least 11 letters. The first important observation is that there is no need to look at 0, ..., n+1. The second is that, when you move on to word 1, you can start with the window 1, ..., n, because 1, ..., n−1 is guaranteed not to have enough letters. Hence, the solution should be O(M+N), where M is the size of the input and N is the size of the reference string, **not** O(MN).
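Here is a sketch of that two-pointer idea in C. The problem statement is not restated in this post, so the match condition below is my own stand-in: a run of consecutive words "matches" when its pooled letters are exactly an anagram of the reference string. The essential property is visible either way: the right end of the window never moves backwards, so every letter is added and removed at most once, which gives the O(M+N) bound.

```c
#include <assert.h>
#include <string.h>

enum { ALPHA = 26 };  /* assumes lowercase a-z input */

/* Add (sign=+1) or remove (sign=-1) a word's letters from the window. */
static void add_word(const char *w, int cnt[ALPHA], int sign, int *total)
{
    for (; *w; w++) {
        cnt[*w - 'a'] += sign;
        *total += sign;
    }
}

/* Returns 1 if some run of consecutive words is an anagram of ref
 * (my stand-in definition, not the contest's).  Two pointers: hi only
 * ever advances, so the whole scan is O(M + N). */
static int has_match(const char *words[], int nwords, const char *ref)
{
    int ref_cnt[ALPHA] = {0}, cnt[ALPHA] = {0};
    int ref_total = 0, total = 0, lo, hi = 0;

    add_word(ref, ref_cnt, +1, &ref_total);
    for (lo = 0; lo < nwords; lo++) {
        /* extend the window until it has at least enough letters */
        while (hi < nwords && total < ref_total)
            add_word(words[hi++], cnt, +1, &total);
        if (total == ref_total && memcmp(cnt, ref_cnt, sizeof cnt) == 0)
            return 1;
        /* drop the leftmost word and try the next starting index */
        add_word(words[lo], cnt, -1, &total);
    }
    return 0;
}
```

Note that when the window for start 0 ends at word n, the window for start 1 resumes directly from n, exactly as the second observation above says.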

Problem C is easy for anyone with experience in algorithm contests and hard for others. The solution is a simple BFS.
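For readers without contest experience, here is a minimal BFS sketch in C. The actual problem's state space is not restated in this post, so a small 2-D maze with `#` walls is my stand-in; the queue-and-distance pattern is the same for any state graph.

```c
#include <string.h>

enum { R = 5, C = 5 };  /* illustrative grid size */

/* Breadth-first search on a grid: returns the number of steps from
 * (sr,sc) to (tr,tc) avoiding '#' cells, or -1 if unreachable.
 * dist doubles as the visited set (-1 = unvisited). */
static int bfs(const char grid[R][C + 1], int sr, int sc, int tr, int tc)
{
    static const int dr[] = { 1, -1, 0, 0 }, dc[] = { 0, 0, 1, -1 };
    int dist[R][C], queue[R * C][2], head = 0, tail = 0;

    memset(dist, -1, sizeof dist);
    dist[sr][sc] = 0;
    queue[tail][0] = sr; queue[tail][1] = sc; tail++;
    while (head < tail) {
        int r = queue[head][0], c = queue[head][1];
        head++;
        if (r == tr && c == tc) return dist[r][c];
        for (int k = 0; k < 4; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < R && nc >= 0 && nc < C &&
                grid[nr][nc] != '#' && dist[nr][nc] == -1) {
                dist[nr][nc] = dist[r][c] + 1;
                queue[tail][0] = nr; queue[tail][1] = nc; tail++;
            }
        }
    }
    return -1;
}
```

Because every state enters the queue at most once, BFS visits each cell a constant number of times and, on unweighted graphs, the first time it reaches the target is along a shortest path.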

Problem D is by far the most interesting one. It is more or less taken directly out of an article by Kleinberg, Papadimitriou, and Raghavan on Segmentation Problems. They give a theorem saying that the problem has an O(C^{3}) solution using dynamic programming, based on the observation that optimal groups are made of clients with consecutive optimal prices. They omit the "obvious" proof. I have no idea what solution they have in mind, for mine is O(C^{2}).
Since the winnings are (*a* − *bx*)*x*, the optimal price is *a*/(2*b*) and the maximum winnings are *a*^{2}/(4*b*). These formulas hold for individual clients but also for groups of clients, with *a* and *b* being sums in the latter case. One approach is to try all possible ways of grouping clients and then use these formulas to get the optimal price and profit for each group, but this is too slow. Suppose instead that we know the optimal group prices and try to figure out the groups. Because the winnings are given by a function symmetric around the maximum, we should sell to each client at the group price closest to that client's individual optimal price. This shows that the optimal groups are indeed made of consecutive clients, when sorted by their individual optimal prices.
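The resulting dynamic program can be sketched in C as follows. One caveat: the post does not restate the problem, so the cap `k` on the number of distinct prices is my assumption (with unlimited prices, every client would simply get their individual optimum). Sort clients by *a*/(2*b*), take prefix sums of *a* and *b*, and let `dp[g][j]` be the best profit for the first `j` clients using at most `g` prices; for each allowed price the two inner loops cost O(C^{2}).

```c
#include <math.h>
#include <stdlib.h>

/* Each client buys (a - b*x) items at price x. */
struct client { double a, b; };

/* Sort by the individual optimal price a/(2b). */
static int by_opt_price(const void *p, const void *q)
{
    const struct client *u = p, *v = q;
    double d = u->a / (2 * u->b) - v->a / (2 * v->b);
    return (d > 0) - (d < 0);
}

/* Best profit for n clients using at most k distinct prices.
 * profit(i..j) = (sum a)^2 / (4 * sum b), via prefix sums. */
static double best_profit(struct client *c, int n, int k)
{
    double sa[n + 1], sb[n + 1], dp[k + 1][n + 1];
    qsort(c, n, sizeof *c, by_opt_price);
    sa[0] = sb[0] = 0;
    for (int i = 0; i < n; i++) {
        sa[i + 1] = sa[i] + c[i].a;
        sb[i + 1] = sb[i] + c[i].b;
    }
    for (int j = 0; j <= n; j++)
        dp[0][j] = j ? -HUGE_VAL : 0;   /* 0 prices cover 0 clients */
    for (int g = 1; g <= k; g++) {
        dp[g][0] = 0;
        for (int j = 1; j <= n; j++) {
            dp[g][j] = -HUGE_VAL;
            for (int i = 1; i <= j; i++) {  /* last group is i..j */
                double a = sa[j] - sa[i - 1], b = sb[j] - sb[i - 1];
                double v = dp[g - 1][i - 1] + a * a / (4 * b);
                if (v > dp[g][j]) dp[g][j] = v;
            }
        }
    }
    return dp[k][n];
}
```

For example, clients (*a*=10, *b*=1) and (*a*=4, *b*=1) have individual optima of 25 and 4; with two prices the best profit is 29, while a single price forces *a*=14, *b*=2 and only 14^{2}/8 = 24.5.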
