Friday, March 28, 2008

When guitars meet computers


I am always surprised by the proportion of amateur artists among engineers. For example, 5 of my 10 closest colleagues in the office are guitarists.

I guess it comes from the need to balance rigid computer logic with forgiving art. Music is a frequent choice, and the guitar seems to win over the piano (who wants yet another keyboard?).

But the interesting point is that one kind of guitar, the electric guitar, wakes up the computer geek: plug it into the microphone input of a basic sound card and it just works.

The geek has just entered a new kingdom: DSP (Digital Signal Processing) land. There are tons of software, mainly VST plugins, that simulate effects digitally and turn a common PC into the equivalent of hundreds of kilos of hardware racks and kilometers of cable. As described on this page (in French, but with a lot of images), the software is full of attractive GUIs with buttons, sliders, and visualization gadgets. All you need is a computer, a basic sound card, decent speakers, and the software. With very little investment the result is impressive, because the sound is really great.
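
To give a rough idea of what these plugins do under the hood, here is a minimal sketch (plain Java, not a real VST plugin) of one classic guitar effect: soft-clipping distortion applied to a buffer of audio samples. The buffer format and the gain parameter are assumptions made for the illustration, not the interface of any actual plugin.

    // Minimal sketch of a soft-clipping "distortion" effect, the kind of
    // processing a guitar plugin applies to the audio stream in real time.
    // Samples are assumed to be normalized floats in [-1.0, 1.0].
    public class Distortion {

        private final double gain; // pre-amplification before clipping (illustrative parameter)

        public Distortion(double gain) {
            this.gain = gain;
        }

        // Process one buffer of samples in place.
        public void process(double[] buffer) {
            for (int i = 0; i < buffer.length; i++) {
                // tanh gives a smooth, tube-like saturation instead of hard clipping
                buffer[i] = Math.tanh(buffer[i] * gain);
            }
        }

        public static void main(String[] args) {
            double[] buffer = { 0.1, 0.5, -0.8, 0.9 };
            new Distortion(5.0).process(buffer);
            for (double s : buffer) {
                System.out.println(s);
            }
        }
    }

A real plugin does the same kind of per-sample arithmetic, only on a live audio stream and with a GUI knob wired to the gain.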

The geek is now ready for his first quest: the Perfect Sound.

Once he's satisfied with the sound, he can aim at the second quest: the content. Here the computer helps again: the internet is a huge repository of songs, guitar tablatures, guitar lessons, and even videos of guitar players.

Then again, it's not much fun to play alone, and here too the computer helps: it provides an orchestra, playing mp3 or MIDI songs along with the guitar.

And here I take my revenge on the computer. It rejected my program because I forgot a semicolon; now I impose on it all my rehearsals, with the same mistakes at the same places. And when it's finished: "play it again, Sam". For once, it does everything I want.

More precisely, I specialize in Pink Floyd solos, like "Is There Anybody Out There?", "Time", or "Fat Old Sun". I'm very impressed by David Gilmour's playing. The solos are usually slow-paced, with few notes, but each one sounds great and contributes to a beautiful harmony... He uses a lot of bends, which consist in pushing the strings across the fret to apply extra tension and raise the pitch of the note gradually. Combined with very small tempo shifts, it gives a solo full of tension that grabs the brain's attention, followed by relief as the playing catches back up to the normal pitch and tempo.

Gilmour's solos look simple on paper, but believe me, they are very hard to work out with the same touch. Anyway, I'm having a lot of fun with the electric guitar plugged into my PC, the sound presets, the MIDI orchestra, ...

Thursday, March 20, 2008

The most expensive bug


On June 4th, 1996, the first European Ariane V rocket exploded 40 seconds after launch. The payload alone cost about US$370 million. The cause? A bad cast in the software initiated a chain of dramatic errors and led to the rocket's destruction.

The full report is worth reading; here is a summary.

The attitude of the rocket is given by an Inertial Reference System (SRI, from its French acronym), which is a combination of laser gyros and accelerometers. This critical piece of hardware sends a stream of data about position, height, speed and acceleration to the main computer, which controls the engine nozzles and drives the rocket along its expected trajectory.

Ten years earlier, on Ariane III, a software function performed pre-flight checks to align the SRI. This function was no longer needed on Ariane IV, but it still ran during take-off: you know it's easier to leave harmless code in place than to remove it. The function used 7 variables, 3 of which were not protected against overflow. That was not an issue, because the rocket's trajectory kept these 3 variables in range.

Not surprisingly, this function was still running on Ariane V. Unfortunately, the Ariane V trajectory was quite different, and one of the unprotected variables, the horizontal velocity, cast from a 64-bit float to a 16-bit integer, went out of range and raised an uncaught exception.
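
The flight software was written in Ada, but the trap can be sketched in Java as well. In Java a narrowing cast does not even raise an exception, it silently wraps around; the explicit range check below is the kind of protection the unprotected variables were missing. The value is made up for the illustration.

    public class NarrowingCast {
        public static void main(String[] args) {
            double horizontalVelocity = 40000.0; // made-up value, far beyond the 16-bit range

            // Unprotected conversion: Ada raised an Operand Error here;
            // Java silently wraps around to a meaningless value.
            short unprotected = (short) horizontalVelocity;
            System.out.println("unprotected cast: " + unprotected);

            // Protected conversion: check the range before narrowing.
            if (horizontalVelocity > Short.MAX_VALUE || horizontalVelocity < Short.MIN_VALUE) {
                System.out.println("out of range, handle the error explicitly");
            } else {
                System.out.println("protected cast: " + (short) horizontalVelocity);
            }
        }
    }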

So far, no big deal. A check function raised an exception. Let's forget the check function and resume the mission.

However, the design assumption on Ariane was that the software is always right and only the hardware may fail. The software reported an error, which was interpreted as the SRI being out of order, so the SRI was shut down.

That's probably the biggest mistake. A failing unit test is embarrassing enough, but it doesn't always mean the software is out of business. In this case the SRI was still delivering reliable information. Shut down, it no longer could.
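
To make the point concrete, here is a hedged sketch (Java again, nothing like the real Ada flight code, and every name is invented) of the two policies: shutting the whole unit down on the first exception, versus confining the failure to the non-critical alignment task so the critical data keeps flowing.

    // Invented illustration, not the real flight software: the alignment task is
    // non-critical, so its failure should not stop the delivery of attitude data.
    public class InertialReferenceSystem {

        private boolean shutDown = false;

        // Critical path: keeps producing attitude data as long as the unit is up.
        public double readAttitude() {
            if (shutDown) {
                throw new IllegalStateException("SRI shut down, no more data");
            }
            return 0.42; // placeholder for a real sensor reading
        }

        // Ariane-style policy (simplified): any exception means the hardware is broken.
        public void alignmentCheckFailFast() {
            try {
                alignmentCheck();
            } catch (ArithmeticException e) {
                shutDown = true; // the whole unit stops delivering data
            }
        }

        // Alternative policy: confine the failure to the non-critical task.
        public void alignmentCheckConfined() {
            try {
                alignmentCheck();
            } catch (ArithmeticException e) {
                // log and carry on: the attitude data is still reliable
                System.err.println("alignment check failed: " + e.getMessage());
            }
        }

        private void alignmentCheck() {
            throw new ArithmeticException("horizontal velocity out of range");
        }
    }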

The backup SRI started providing replacement data, and was shut down 0.05 seconds later because of the same bug. Once again, the assumption that "hardware may fail, software cannot" made the backup SRI totally useless in this case.

Without sensible guidance, the rocket was doomed. But to accelerate the disaster, the SRI modules started sending diagnostic dumps instead of normal data to the main computer. The computer interpreted this data just as if the rocket were upside down and commanded an emergency half turn. The launcher started to tear apart under the aerodynamic loads, which triggered the self-destruction process.

The story is sad enough as it is; no need to add that a suitable test or a full simulation before the flight would have caught the bug.

By chance, I knew one of the members of the investigation team. He told me something that is not in the final report: what greatly contributed to killing Ariane V was the absence of experienced computer scientists in top management. The software components were simply divided up and managed individually. A competent software supervisor with suitable authority could have caught one of the errors, and prevented the cast exception from eventually stopping the delivery of correct SRI data.

But Ariane was a physicists' toy; they didn't share it with a software department...

PS: The lesson was learned. Today Ariane V is very successful and has failed only once more in 37 flights.

Friday, March 14, 2008

Marcel-Paul Schützenberger and complexity

If you are lucky, you may meet a few extraordinary people in your lifetime. MPS is one of them. I attended his lectures when I was a student at university (Paris 7). He is not really famous; he never looked for fame. We were always fewer than 10 in the audience. So who is he?

MPS was a physician, a mathematician, and a computer scientist, and obviously excellent in all three domains. This breadth of knowledge gave him a very realistic vision of computer science. Basically, computers were for him a tool to develop mathematics, with a great future in biology. The last part sounds obvious today, but he said it more than 20 years ago.

But what made his lectures so engaging is that he had stories to tell. As one of the founders of modern computer science, he had met all the other founders around the world (ok, mainly in the USA) and proved a very important theorem in language theory, so he had plenty of nice anecdotes about that pioneering period. I only remember a couple of them, which I'll keep for another blog entry.

For the moment I want to focus on a sentence he said, which is carved in my brain forever:
"There are 2 kinds of program. The short ones and the long ones."
This can be understood in different ways. I think the basic idea is that we should keep our distance from computer power and program complexity: whatever we are trying to develop, it is after all only a computer program, so let's just break it down into a sequence of instructions.

For example, in the 80s the hype was about Artificial Intelligence. For most people, AI applications were the most complex ones imaginable, so complex that nobody could complete them anyway. Eventually AI died because it didn't fulfill its promises. Some blamed computer performance, but even though computer power doubles every 18 months, it will never catch up with AI complexity, which scales up to infinity. Others blamed the poor expressiveness of computer languages, too low-level. That's a better answer, but still no. The real reason AI died is the lack of theory. Without theory, there is no suitable language and no proof that the algorithms terminate. Your program can then be as long as you want, it won't implement the specifications correctly (or you won't be able to prove that it does). The conclusion is that the program doesn't do it all by itself; it is only a tool, and it cannot make up for a hole in the theory.

Note that MPS didn't mean to underestimate the complexity of programming. For the developer, the complexity lies in the constraints s/he has to deal with: the programming language, the tools, the software architecture and design, and the available resources. But the program is only the implementation of an algorithm, and a correct and efficient algorithm relies on a strong theory.

Friday, March 7, 2008

The Next Programming Language



Everybody knows Moore's law: "Computer performance doubles every 18 months". But programming languages also have their own growth law: "A really new language appears every 10 years". "Really new" is of course relative, and should be understood as "a successful language with genuinely new features". Let's review the last ones:

  • 1972: C was the first high-level language bound to an operating system (Unix). For the developer, it meant very fine-grained control over the host machine and the ability to use a high-level language to program at a low level.
  • 1983: C++: C with an object-oriented layer. Note that one of its most touted features was portability, which turned out to be a disaster (C++ libraries were less compatible than C ones). C++ went too far: macros, operator overloading, and multiple inheritance eventually made applications a nightmare to maintain and to integrate.
  • 1996: Java: object-oriented, native multi-threading, GC, beans, exception handling, no macros, no multiple inheritance, clean packaging, portability, applets as the very first browser plugin, etc. A lot of advantages.
  • 200*: Web scripting languages: JavaScript, PHP, Flex, XUL, etc. We can't say that any one of them is the leader, but they certainly all contributed to empowering the web and making the web experience as sophisticated as a full-fledged application. Some people say the language of this decade is rather .NET/C#. Sorry, I disagree: there is no revolution in C#, it sits in between.
  • 2010: What is waiting for us? My guess is a new language that will handle multi-threading for the developer as simply as Java handled the memory.
Multi-threading is really calling for a new language. Multicore architectures need fine-grained control over how threads are dispatched across the cores: when two threads are expected to communicate a lot, they should run on two neighbouring cores so they can share the same L2 cache.

Most multi-threaded languages, like Java, offer synchronization by locks, which is probably simple to implement in the OS, but surely the most difficult option for the developer. I really believe there are other viable solutions, like the transactions that already work for databases, which relieve the developer of the synchronization logic and of all the race condition, deadlock, starvation, and CPU contention bugs. These bugs are a pain to track down and fix. We have almost no tools to help, and no background theory. Sigh.
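
To illustrate why locks are so hard on the developer, here is a minimal Java sketch of the classic pitfall: two threads taking the same two locks in opposite orders. Nothing in the language warns you; the program may run fine a thousand times and hang on the thousand-and-first (this example is deliberately written to make the deadlock likely).

    // Classic deadlock pitfall with plain locks: two threads acquire
    // the same two locks in opposite orders.
    public class DeadlockSketch {

        private static final Object lockA = new Object();
        private static final Object lockB = new Object();

        public static void main(String[] args) {
            Thread t1 = new Thread(new Runnable() {
                public void run() {
                    synchronized (lockA) {
                        pause(); // widen the window so the bad interleaving is likely
                        synchronized (lockB) {
                            System.out.println("t1 got both locks");
                        }
                    }
                }
            });
            Thread t2 = new Thread(new Runnable() {
                public void run() {
                    synchronized (lockB) {
                        pause();
                        synchronized (lockA) { // opposite order: potential deadlock
                            System.out.println("t2 got both locks");
                        }
                    }
                }
            });
            t1.start();
            t2.start();
            // The only cure with plain locks is a convention (a global lock order)
            // that neither the compiler nor the JVM can check for you.
        }

        private static void pause() {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }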

Java is my favorite language, but I must admit it falls short regarding multi-threading. The new JDK 1.5 package java.util.concurrent helps a lot, but fundamentally it doesn't get rid of the complexity.
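
As a small example of what java.util.concurrent buys you: the ExecutorService hides the thread management and an AtomicInteger removes one class of races, but the developer still has to decide what must be atomic with respect to what. The task itself is made up for the illustration.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // What java.util.concurrent (JDK 1.5) gives you: thread pools and atomic
    // variables, so you no longer hand-roll threads and low-level locks.
    public class ConcurrentSketch {

        public static void main(String[] args) throws InterruptedException {
            final AtomicInteger counter = new AtomicInteger(0);
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for (int i = 0; i < 1000; i++) {
                pool.execute(new Runnable() {
                    public void run() {
                        counter.incrementAndGet(); // atomic, no explicit lock needed
                    }
                });
            }

            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);

            // The count is correct, but as soon as a task has to update two
            // related values consistently, you are back to designing your own
            // synchronization: the complexity is reduced, not removed.
            System.out.println("count = " + counter.get());
        }
    }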

Besides that, the next language will look very similar to Java, with smart packaging, an object-oriented layer, GC, etc., and the usual syntax for control statements.