Saturday, December 07, 2002

I wrote this piece on the corporate machinations behind Peter Jackson's movies of The Lord of the Rings a few months ago because I was trying (unsuccessfully) to impress someone. As The Two Towers is being released in a couple of weeks, and as a reasonable amount of effort went into writing it, I might as well blog it for public consumption.

This is the story of how the corporate downsizing of AOL Time Warner led to a mad New Zealander making three movies of JRR Tolkien's The Lord of the Rings, the first of which was enormously successful, as undoubtedly will be the other two. A little of this story is supposition, although I think it is mostly pretty much true. There may be lessons in this story, although I am not quite sure what they are.

However, some background is necessary for it all to make sense.

One piece of background: in Hollywood, it is quite common for a film studio to fund early development of a motion picture, so that a script is written, a producer and director are hired, and maybe even actors are cast, and then to decide at the last minute that it does not want to make the film after all. In this instance, the film is put into a limbo that Hollywood calls 'turnaround'. This usually means that the producer is given a few weeks in which he can offer the movie to other studios. If in that time he can find another studio that wants to make the movie, then the first studio gives up the movie - usually after the second studio pays back the preproduction costs that the first studio has already spent. If the producer cannot find a studio that wants to make the movie, then the rights to the movie remain with the first studio. (Sometimes the reason that the studio declines to make the movie in the first place is that they are unhappy with the people attached to it, and their hope is that after the rights revert to them they can assign other people to make the movie. However, this is a risky strategy, because if another studio decides it does like the project, the first studio can lose the right to make the movie altogether. This can be particularly annoying if, after the movie is picked up by another studio, it makes hundreds of millions of dollars and wins multiple Academy Awards).

A second piece of background: over the years many, many organisations have attempted to set up new, independent film studios. Often these have been companies that were involved in some other part of the media or entertainment business and wanted to make movies as well. Some of these have gone bankrupt, but in recent years many of them have encountered another, more peculiar fate. Typically the parent company of the new, smaller studio has been purchased by a large media conglomerate that already owned a larger Hollywood studio and had no use for the newer and smaller studio, and the newer studio has simply been merged into the larger studio. Sometimes this has happened the moment the parent company has been acquired. On other occasions it has not happened until the smaller studio has made a misstep (which in such a hit-or-miss business as movies always happens pretty soon) or until the larger conglomerate has undergone some more general restructuring.

A third piece of background: when producers are unable to obtain the entire cost of making a movie from a single studio, a process called "presale of foreign rights" often takes place. In this case, the producers (or the studio that is going to distribute the movie in the US) shop the movie around to the companies that would distribute it in foreign countries and ask them to put up part of the budget. In return, these distributors get the rights to distribute the movie in their countries plus a share of the profits that would normally go to the studio that financed the picture. If this works well, about two thirds of the budget of a movie can be raised this way. There are some production companies (eg Peter Guber's Mandalay Entertainment) that specialise in making films this way.

In any event, in 1995 20th Century Fox decided at the last moment that it did not want to make "The English Patient", allegedly because Fox wanted Kristin Scott Thomas to be replaced by Demi Moore in the lead female role. (Yes, really.) The movie was placed in turnaround, and producer Saul Zaentz went looking for another studio. The film was picked up by Harvey Weinstein of Miramax Pictures, which has quite a good record of picking up pictures that other studios have put into turnaround and turning them into hits. (Pulp Fiction and Good Will Hunting are two other examples). Inevitably, the film made enormous amounts of money and won a pile of Academy Awards.

Miramax also had a relationship with New Zealand director Peter Jackson, having distributed his arthouse hit "Heavenly Creatures" in the US. Jackson had also made an assortment of appalling (but in a good way) splatter movies like Braindead and Bad Taste. Jackson had a burning desire to make a film of Tolkien's The Lord of the Rings. Saul Zaentz had produced the previous, animated film of The Lord of the Rings in 1978, and still owned the film rights. As Miramax had bailed Zaentz out on The English Patient, Weinstein was able to call in a favour and persuade Zaentz to sell the rights to Miramax. Peter Jackson started working on the film, and Weinstein then went to Michael Eisner, boss of Miramax's corporate parent Disney, to ask for the money to produce The Lord of the Rings. Jackson intended to make The Lord of the Rings as two movies, shot entirely in New Zealand with his New Zealand crew and using a special effects house that he owned, also based in New Zealand. Jackson had never made a film on anything like this scale before, but was a filmmaker of obvious talent. (That said, his first Hollywood film, The Frighteners starring Michael J Fox, had been a failure).

The Lord of the Rings was going to be very expensive to make in the multi-film way that Peter Jackson wanted to make it, and Eisner balked and would not provide Miramax with the money to make the films. Therefore, the project was placed in turnaround and Peter Jackson was given the right to offer it to other studios. Apparently, Harvey Weinstein expected that Jackson would be unable to find a taker, that the rights would revert to Miramax, and that the film could then be made as a single film on a lower budget with another director.

It appeared that this would come to pass, as initially the project was passed on by all the major studios. On the last day he had, Jackson had only two studios left to go to: Polygram Filmed Entertainment and New Line Cinema. Neither was a traditional studio; both were more recent attempts to create new studios. Polygram was a spinoff of the Anglo-Dutch music company of the same name, which was based in London and, while small, had put together a good bunch of films and was on the brink of profitability. The people at Polygram liked the proposal, but said they wouldn't be able to raise the money. (In fact, Polygram had just been bought by Universal, which had wanted its music assets, and the film division was about to be merged into Universal Studios, although this information wasn't yet public). The last studio on the list was therefore New Line Cinema.

New Line Cinema was a company that had been founded by a man named Bob Shaye (known as one of the more obnoxious people in Hollywood, and notorious for once having sex in public at a Hollywood party, but I digress) to produce and sell what is known in Hollywood as "genre films": relatively low budget pictures aimed at a specific audience. (The Friday the 13th films and the Nightmare on Elm Street films basically made the company). It had later moved into more upmarket projects with some success (Austin Powers, Rush Hour) and some failure (Last Man Standing, The Long Kiss Goodnight). In any event, the company had been bought by Ted Turner in the early 1990s as part of his efforts to turn his largely cable television interests into a full scale media empire. In 1996, he gave up on this and sold his entire business to Time Warner, which wanted the cable television interests and the various old film libraries he had bought over the years. Time Warner was not especially interested in the filmmaking interests, as it owned Warner Bros already. The two other filmmaking businesses that had been owned by Turner (Castle Rock Entertainment and Turner Films) were merged into Warner Bros, and for a while New Line Cinema was supposedly for sale. However, Warners did not find a buyer at the price they were asking. New Line was not merged with Warner Bros, supposedly because their cultures were too different. (Presumably Warner executives did not go for having sex in public). But, if you were Bob Shaye, you would have felt threatened.

In any event, in 1998 Bob Shaye was visited by a mad New Zealander who had the rights to make two films of The Lord of the Rings, but he had to direct and the films had to be made in New Zealand. Shaye was apparently a big fan of The Lord of the Rings himself, and was quite impressed by what Jackson brought him. However, it was brought to him on a take-it-or-leave-it basis. He couldn't choose the director or the locations or very much about the movie himself. Taking on such a big project with such an inexperienced director was an enormous risk, and one suspects he wouldn't have taken it if it were not for the Time Warner factor. Time Warner was eventually going to merge his company into Warner Bros, or massively downsize it, or close it, or something like that. (This has happened to every other studio that has got into this position). For this reason, Shaye seems to have said "To hell with it" and told Jackson that yes, he wanted to make the movie. He seems to have had not only the urge to take the project, but also one to raise the stakes while he was at it. He took the project, but as three movies instead of two. Since there are three books there is a certain logic behind this, but it was still a bold move.

New Line initially announced a budget of $150m for the three movies, and presold the foreign rights to small distributors around the world to raise most of the money. This budget is way too small for three movies of that magnitude, and it seems that Shaye knew that and was just lowballing the figure to keep Time Warner happy. As the films were made, Shaye kept raising the stakes further and the budget crept up, with the final budget ending up at $270m. By current Hollywood standards, even this is not an excessive budget for three movies of this scale, and the budget was kept down by the low costs of making the films in New Zealand and the use of mid-list actors rather than big stars. (There were rumours for a while that Keanu Reeves would be cast as Strider, but this did not happen). However, it was a lot of risk, because if the first film had bombed, the money would already have been spent on the other two movies, which would in that case have been certain flops. The point seems to be that right from the beginning Shaye knew he was betting his company on the films. He knew what was coming. If he just allowed the corporate machinations to take place he was gone anyway, so he decided to make the film he wanted to make and go out in a blaze of glory.

And Time Warner did of course get into a mess. It merged with AOL, got caught in the dot com crash, hit the worst advertising market in some time, etc etc. In 2001, it started slashing and burning divisions which were seen as non-core. Warner Bros, which was 'core' (although very badly managed), did not suffer any job cuts, but New Line, which was less 'core', was forced to cut staff numbers by 20%. Bob Shaye was told that he could no longer greenlight any movies with budgets over $50m. It seemed pretty clear that before long New Line would be gone as an independent entity.

But, as this happened, New Line had bet the company on The Lord of the Rings. And the first Lord of the Rings movie turned out to be an enormous hit. The profits from the three movies will end up being in the billions of dollars. The decision to make three movies instead of two or one turns out to be a big positive, as this way everyone will buy three cinema tickets and three DVDs instead of one of each. (Or in the really fanatical cases, they will buy 150 tickets instead of 50, and nine different special edition DVDs instead of three). New Line is safe as an independent entity for now. (In the long run it is probably still doomed, as all this is likely to be forgotten next time there is a misstep).

But, for now, some stories can have happy endings. (At least one insane bearded New Zealander is now insanely rich. And I think a world in which insane bearded New Zealanders can become insanely rich by making ludicrously over-ambitious movies is better than one where this is not the case). As to how Harvey Weinstein feels about having his project go into turnaround and make large sums of money for another studio (and some Academy Awards, although not as many as the English Patient) I do not know, although I can guess.

Erratum: The person who had sex in public at a party was actually former president of New Line Cinema Michael De Luca, and not Bob Shaye. While the basic point - that New Line Cinema has at times been a pretty wild place - remains true, I regret the error and apologise to Mr Shaye.
Well, today I did something new. My laptop had a problem, and I actually managed a hardware repair myself, which required me to use a soldering iron. The story of how I reached this point is long and boring, but given how impractical I am, I am somewhat amazed that it worked.

Friday, December 06, 2002

Following up what I had to say about how demographics affect the economic prospects of various countries, the Economist has an article on the subject. (It's again subscription only, unfortunately).

Essentially the point is this. When a country reaches a certain stage of development, the number of children per mother drops. For a while, you have a situation where there are relatively few old people to be looked after (because the old people had relatively large numbers of children, the number of old people per working age person is small) and relatively few children, because the birth rate has dropped. Therefore, a large portion of the population can devote itself to economically productive activity, and the economy can get a large, one-off boost. Eventually, this large group of people gets old, and then you have a demographic brake, because they did not have that many children. If this drop in the number of children occurs too rapidly, as has occurred in Japan, then the economy can really hit the wall when this point is reached. The trick is to use the one-off economic boost without knocking things too far out of balance in the long term.
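The arithmetic behind this can be sketched in a few lines. This is a toy illustration with entirely made-up cohort sizes (in millions), not real data for any country:

```python
# A rough sketch of the demographic-dividend arithmetic described above,
# using invented cohort sizes for three stages of the transition.
# The dependency ratio is (children + elderly) / working-age population.

def dependency_ratio(children, working_age, elderly):
    """Dependants per working-age person."""
    return (children + elderly) / working_age

# Stage 1: high fertility -- many children, few elderly.
before = dependency_ratio(children=40, working_age=50, elderly=10)

# Stage 2: fertility has just dropped -- the big cohort is now of working
# age, with few children AND few elderly. This is the one-off "dividend".
dividend = dependency_ratio(children=20, working_age=70, elderly=10)

# Stage 3: the big cohort has aged, and it had few children of its own,
# so there are relatively few workers to support it -- the "brake".
after = dependency_ratio(children=15, working_age=45, elderly=40)

print(f"before: {before:.2f}, dividend: {dividend:.2f}, after: {after:.2f}")
# -> before: 1.00, dividend: 0.43, after: 1.22
```

The numbers are arbitrary, but the shape is the point: the ratio dips during the dividend window and then rises above its starting level once the large cohort retires.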

This also means there is only one window of opportunity. Low fertility eventually leads to a rising proportion of older people, raising the dependency ratio as the working population goes from caring for children to caring for parents and grandparents. Nor is the demographic dividend automatic. The right policies—a flexible labour market, investment and saving incentives, provision of high-quality health care and education—are still essential to making the working-age population more productive.


If these observations are correct, developing countries entering their demographic transition have a unique chance to cash their population dividend. South Asia will reach its peak ratio of workers to dependants between 2015 and 2025. Richer Latin American countries have completed the transition, but poorer ones continue to lag; their peak is likely in 2020-30. In sub-Saharan Africa, however, only 11 countries are expected to reach that stage before 2050, and a lot still has to be done to reduce fertility levels. The rapid rise in AIDS deaths will also frustrate changes in age structure that would otherwise occur. And once the transition is over and the dividend pocketed, countries will face the next big challenge: how to care for the old.
The following sentence was found in a Shania Twain album review:
As you'll discover, the blue versions strip off just about everything from the red/green versions except Shania's voice (left surreally untreated), replacing them with a ghastly quasi-polyrhythmic clatter something like a troop of OCD lemurs trying to imagine how a tense coalition of former minor Men at Work and UB40 members would rewrite Atari Teenage Riot songs as Eurovision entries.
Okay, I have now added a site search feature to this blog.
The Economist now has a leader (subscription only, unfortunately) arguing that Turkey should be given a clear timetable for being allowed into the EU. There's nothing new here: Turkey probably won't be ready to actually join for 10-15 years, but we should begin negotiations, help them get ready, and proceed in good faith. We are hearing strong arguments along these lines from American and British pundits. The hostility is coming from France and Germany. Which, sadly, is France and Germany's failing.

Meanwhile, the US is warning about possible terrorist threats in Turkey. The warning concerns a potential attack on US facilities in Turkey, but I have heard occasional rumours over the last year about foiled or planned attacks on Ataturk's tomb in Ankara and the like. (The thing that strikes me about Turkey, however, is that if terrorists did blow up Ataturk's tomb, there would be a huge uprising of Turks in opposition to the terrorists). The US State Department concludes with

The Turkish government has taken "prudent measures" to deal with the possible threat, the department said.

The statement referred to Turkey as a country that fully cooperates with the United States in the war on terrorism.

In the case of Turkey, unlike in the case of Saudi Arabia, this statement is actually true.
Admittedly, I probably wouldn't be posting this if I came from Queensland.
You know, the strategy of leaving Steve Waugh out of the Australian one day cricket team and all the preliminary squads for the World Cup, then bringing him into the final squad at the last moment, and then bringing him into the actual team halfway through the World Cup, might have some merit to it. He seems to be getting a little pissed off.

Waugh made 24 not out from 12 balls but in five deliveries from all-rounder Ronnie Irani, he dropped a fierce pull-drive onto the first-floor balcony of the Members Stand, smashed a second six into the spectators in front of the same pavilion and ended the over and the match with another straighter drive over the long-on fence.

And these stickers were apparently on the packaging of the overpriced pies for sale at the SCG last night.

Thursday, December 05, 2002

The Onion again

Above: Implicated in the presidential-lying scandal are George Washington, John Adams, Thomas Jefferson, James Madison, James Monroe, John Quincy Adams, Andrew Jackson, Martin Van Buren, William Henry Harrison, John Tyler, James K. Polk, Zachary Taylor, Millard Fillmore, Franklin Pierce, James Buchanan, Abraham Lincoln, Andrew Johnson, Ulysses S. Grant, Rutherford B. Hayes, James Garfield, Chester Arthur, Grover Cleveland, Benjamin Harrison, William McKinley, Theodore Roosevelt, William H. Taft, Woodrow Wilson, Warren Harding, Calvin Coolidge, Herbert Hoover, Franklin Roosevelt, Harry Truman, Dwight Eisenhower, John Kennedy, Lyndon Johnson, Richard Nixon, Gerald Ford, Jimmy Carter, Ronald Reagan, George Bush, Bill Clinton, and George W. Bush.

The bizarre world of fashion

Leading fashion magazine Vogue Australia has been accused by nutritionists and anorexia counsellors of encouraging potentially dangerous weight-loss techniques.

An article in the latest edition, about how models lose weight, conveyed dietary information that was confusing, wrong, obnoxious, irresponsible and potentially dangerous, nutritionist Rosemary Stanton said.

The article quotes Miami personal trainer Ruddy Esther as saying when models need to lose weight in two weeks he sends them on "45-minute runs every morning with nothing but sugar-sweetened coffee and an over-the-counter fat-burning pill in their stomachs".

Wow, if you want to weaken your heart dramatically in order to die young, or perhaps even collapse and die on the spot, this is a great way to go about it. But there's more.

A third (trainer), Joe Dowdell, says it is entirely possible that a 1.78-metre tall model, weighing 52kg, may need to lose body fat.

No it isn't. Ugh. Humanity in the modern world has developed some truly odd pathologies. Who could possibly want to live like that?

As a heterosexual male, I will merely observe that women with a few curves are much more attractive than those who are starved and skinny. Most men of my acquaintance seem to feel the same way. Where did this come from?
Every now and then, someone in Britain publishes a book or makes a TV program which extols the virtues of some Briton who did something important. A few years ago, Dava Sobel wrote the book Longitude, which told the story of John Harrison, who in the 18th century succeeded in producing watches robust enough and accurate enough that mariners could then determine their longitude. (Longitude is hard, as you must know both the time and the position of the stars. Latitude is easy, because you can do it with the stars alone). People bought this book in huge numbers, and Harrison's achievements are now consequently quite well known. (Harrison's clocks and watches can today be seen in the Greenwich Observatory, which is well worth a visit, also because Greenwich is an extremely nice spot, with a large park surrounding the hill on which the observatory sits, and a nice view).
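Once you have an accurate clock, the longitude calculation itself is trivial, which is why Harrison's watches mattered so much. The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour, so the difference between local time (found from the sun or stars) and Greenwich time (read off the chronometer) gives your longitude. A sketch of the arithmetic, with illustrative numbers:

```python
# Longitude from the difference between local time and Greenwich time.
# The Earth rotates 360 degrees in 24 hours = 15 degrees per hour.

DEGREES_PER_HOUR = 360 / 24  # 15

def longitude(local_hours, greenwich_hours):
    """Longitude in degrees; positive means east of Greenwich."""
    return (local_hours - greenwich_hours) * DEGREES_PER_HOUR

# Local noon (12:00) while the chronometer reads 14:00 Greenwich time:
# local time lags Greenwich by two hours, so you are 30 degrees west.
print(longitude(12.0, 14.0))  # -> -30.0
```

The hard part was never the arithmetic; it was keeping Greenwich time accurately on a pitching, rolling ship, which is the problem Harrison solved.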

The one good thing to have come out of the BBC's recent effort to find the Greatest Briton is Jeremy Clarkson's superb documentary about Isambard Brunel. He isn't the greatest ever Briton, but the British are much more aware of what he did than they were a couple of months ago. You hear his name being mentioned in the media and other contexts now. (How long this will last remains to be seen). For now though, this is great. Just this morning I was on a train approaching London Bridge from the south-east. There was a rather well spoken English chap sitting near me, having a conversation with a couple of travelling companions. He was talking about the various buildings nearby, and how warehouses were being refitted as residential property, and how Dickens had once worked in one of the arches of the railway to Greenwich, etc. He mentioned that he had been through the Rotherhithe tunnel, which had been built by "Brunel" in 1840.

I was very tempted to pipe up that there were actually two Brunels, and that the tunnel, which was the first tunnel in the world to be built under a navigable waterway, was actually built by Marc Brunel, not Isambard. (While Marc Brunel was an important engineer, he was nowhere near as important as his son, which is why if you just say "Brunel" you are referring to Isambard). However, this is England and you don't do this. (If you are in London and are going to Greenwich to see the observatory, it is well worth going through the tunnel, however. To do this, you have to go on the East London Line of the underground, which is orange on Harry Beck's famous map).

Wednesday, December 04, 2002

Okay, here is the piece on cellphone software that I accidentally posted part of the other day. It's long, and still a little rough in places. I might revise it later.

There are articles in both Salon and The Economist on Microsoft's attempts to get into the cellphone software market. However, the slants of the two articles are quite different from each other. The Economist presents it as a battle between Nokia and Microsoft. Salon barely mentions Nokia, but talks about the various non-Microsoft software that is available.

I think that Salon is on the right track here. To my mind, The Economist dramatically overstates the position of Nokia. I don't think Nokia is going to be an important player in the cellphone software market, and I will get to that later in this posting. The Microsoft case is more interesting.

A great deal of effort over the years has been devoted to the question of "Who will be the Microsoft of this market?", whatever technology market is being discussed. And that is entirely the wrong question. None of these markets ever gets itself a Microsoft. The question is why precisely there is a Microsoft in the PC market. To get the answer to that, you have to go back to the very beginnings of the PC industry, around 1975. This was an industry being created by ex-hippies in the San Francisco Bay Area and in a few other areas of the United States. This was being done by people in garages, and nobody had any idea how important the industry was going to be. (The best picture of this industry in its early state is Fire in the Valley by Paul Freiberger. The second section of Steven Levy's Hackers also gives the gist of the story in a shorter form with fewer digressions). The exception was Bill Gates, who was talking about "a computer on every desk, and every one running Microsoft software" as far back as 1975. Incredibly early, he produced a software item (Microsoft Basic) that it was almost mandatory for everybody to own, and he managed to define the intellectual property basis under which the PC software business operated. He managed to leverage his dominance of programming languages on PCs into dominance of operating systems on PCs, and that into dominance of the PC application market. Microsoft has used the strategy of embrace and extend (or embrace and destroy, as some of Microsoft's competitors describe it) ever since. Loosely: people have to buy your operating system. When someone invents a new feature that most computer users are going to want, include it with Windows, and then people will use your version that comes with Windows rather than the version they have to pay extra for. This has expanded the idea of what an operating system is. This did not have to happen, but Gates got to define the business in the image he wanted.

What we have ended up with is a situation where, to do business, you have to run Microsoft Windows and you have to use Microsoft Office applications. To apply for a job electronically, you need Microsoft Word, because people expect your CV to be in that format. Essentially, if you are going to function in an office environment, you must pay a toll to Microsoft. Once you are using Microsoft, you are locked in: the Microsoft software you use relies on file formats that are incompatible with everything else.

However, this is not the natural state of affairs. It exists because of Bill Gates' early understanding of how the market would develop, and through his ruthlessness and sheer power of will. If you are developing new markets and new products from scratch, open file formats have a big advantage over closed ones. If one product is incompatible with everything else, and the others are all compatible, the majority of the industry will be using the compatible ones before long. It is only when people are locked in that incompatible, proprietary formats win. For that, the proponents of the incompatible formats must move first. And Bill Gates moved first, all right. In fact, he did so long before anybody else understood what he was doing.

None of this is going to happen again. For one thing, this time everyone knows what Bill Gates is doing. Nobody is going to allow one company to dominate the way he would like to. For another thing, the open file formats have already been invented. (Essentially they are called "XML"). It is going to be very hard to get people to use proprietary formats instead.

As an example of why, consider the internet itself. In my daily internet experience, I use the underlying TCP/IP protocols. These were invented under the auspices of the Advanced Research Projects Agency of the US Department of Defence. Over this, I use a web browser. Sometimes I use Microsoft Internet Explorer. Sometimes I use the open source Mozilla. In both cases I am using the HTTP protocol to transfer data that is mostly stored in the HTML markup language. Both of these things were invented by Tim Berners-Lee at CERN, and further development of them is in the hands of the World Wide Web Consortium. If the web page contains pictures, these are usually in the JPEG format invented by the Joint Photographic Experts Group. If I access music, it is normally in the MP3 format developed within the Moving Picture Experts Group (MPEG). If I run a game within a web page or show animation, it will run within the Java runtime environment, invented by Sun Microsystems, or it might use Javascript, or it might use Flash. If I wish to play video I have downloaded, it might be in the QuickTime format invented by Apple, a Windows Media format invented by Microsoft, or the MPEG-4 format invented by MPEG. (MPEG-4 also consists of a number of alternatives, called profiles). The underlying operating system that everything is running on may be Microsoft Windows, or it may be MacOS, or it may be Linux, or it may be FreeBSD, or it may be one or two even more obscure things.

The internet is old. It has existed in some form since the late 1960s, and it was extremely common running on Unix systems (and VMS, and even eccentric things like ITS) in universities, and defence and other research establishments, in the 1970s and 1980s. (I used it myself in these environments from 1987). It had developed its own (large) code base, and its own standards for data transmission and the like, before it came into the mainstream. The last pieces of development that were done principally in the Unix world were the development of the World Wide Web by Berners-Lee at CERN and of the first graphical browser by Marc Andreessen at the University of Illinois. At that point, the internet hit the mainstream, and the internet's code base and standards collided with the code base and standards of the PC world, which belonged to Microsoft. At this point, huge numbers of people started developing new applications to go with the internet, which ultimately led to many of the things that I described above. Microsoft has repeatedly attempted to embrace and extend its position of dominance in the PC world to take over these new technologies, and ideally the internet technologies themselves, but it has never really succeeded. Oh yes, there are pieces of Microsoft software in the above. Some of these are quite commonly used. However, nowhere in the above does Microsoft have anything resembling a monopoly. In virtually all the portions of the internet software market I describe above, there are alternatives, and my system understands all of them. If one alternative becomes too expensive, or too closed, another becomes dominant. (An example of this is the GIF picture format. GIF was developed by CompuServe, now part of AOL, and a few years ago Unisys, which held a patent on the compression algorithm GIF uses, started making financial demands on the people who used it that were perceived as unreasonable. People responded by ceasing to use it, and instead using rival formats such as JPEG and PNG, which have much less onerous terms of use).
(An advantage of having computers connected to the internet is that software to support new formats can be downloaded almost instantaneously if you wish. The interfaces, and in many cases the source code, of all this software are open. It is extremely easy to interface with it and adapt it for your use. This is much harder to do in the Microsoft based PC world).

If Microsoft did not exist, I believe the PC market would be in a similar position to the way I have described the internet software market. Your desktop PC would contain a wide assortment of software from a wide assortment of vendors. This would interoperate using standard file formats that either belonged to no particular vendor or to a variety of different vendors who licensed the formats on very easy terms (probably terms in which users did not pay to use them). Software would be written in such a way that functionally identical software would run on a variety of different operating systems. (This isn't that hard: at the moment Microsoft provides Office for two different operating system families of its own (Windows 95/98/ME and Windows NT/2000/XP) and one belonging to someone else (MacOS)). In any event, the idea of what an operating system is would be smaller than it is now, as Microsoft would not have embraced and extended. There would probably be more graphical user interfaces than one standard one.

This, much more than Microsoft hegemony, is the natural state of things. This was where the PC world was before the advent of the IBM PC and MS-DOS in 1981, and this is where it would have stayed without Gates' influence. There would have been a convergence to compatible hardware, but there would not have been convergence to a single hegemonic software structure.

Bill Gates' argument would be that the additional standardisation that has occurred due to Windows' dominance led to greater economies of scale, and therefore more rapid development. I cannot argue definitively that this was not so, but I do not believe it. I think that with more open software and standards, development would have been faster. My belief is that because Microsoft has established a toll on your desktop, development in the PC world has been retarded rather than advanced.

In any event, the desktop PC appears to have reached the end of its growth phase about five years ago. Essentially, since about the point when Windows 98 was released, no compelling new PC applications have been invented, and there have been no significant additions to the features of the ones we have. (Microsoft's subsequent operating systems have been more stable, but have had little in the way of compelling new features). There have been no new, innovative applications for which the PC is the key. There have been a whole variety of internet based applications, but the bottleneck restricting how they work is the speed of the networks, not the speed of the PC. So we find ourselves in a situation where people have less desire to upgrade their PCs as often, and where the price of PCs is slowly dropping. (For quite a few years, PCs stayed around the $2000 price point, and people paid the same for a PC with more features. However, the price point has clearly dropped). As Microsoft's business model is based on high prices and high margins, in the long run this is not good news for them. The greater the percentage of the cost of the PC that is being paid to Microsoft, the greater the incentive for people to go and look for their software elsewhere.

There is innovation in hardware - be it scanners, digital cameras, camcorders, and whatever - but Microsoft doesn't control the market for driver software for these. And there is innovation in the game market. New games continue to be invented, and these games continue to drive the PC hardware market. However, the PC games business plays by an entirely different set of intellectual property rules than the rest of the software industry anyway. Games are much more open, and invite modification from users, and so deliberately have open code. (In the long run, games are becoming networked too, and again we are heading for a situation where network speed is the bottleneck). The fact that the innovation is in these areas is making PC software more cluttered and less Microsoft dominated too. I am not saying Microsoft is going to lose its dominance of the desktop any time soon: I will leave that to Eric Raymond. However, it isn't going to be able to rake in enormous margins on PC software forever, and Bill knows this.

So Microsoft has to expand elsewhere. Ideally they would like to embrace and extend some more. It's obvious to everyone that PDAs and cellphones are going to merge with each other, and that this is going to be a platform at least as important as the PC, so that is one thing Microsoft is working on. Again they are trying to embrace and extend their present Windows business into the cellphone world. The Salon article is full of comments by Microsoft about how people should want to use Windows on their cellphones because they already use Windows and it is good to use a common interface. This seems to be the only way in which Microsoft is capable of embracing and extending: follow us, and you will get to use a common interface with your PC. This is a much weaker force of coercion than saying that because you want to buy some PC software from us, we will compel you to buy other things as well.

Now, if people used Microsoft's interface because it was a great interface, then Microsoft might get somewhere with this strategy. However, they don't. They use it because it is adequate and because it is a monopoly. It is a utilitarian, desktop interface. It's largely something you use at work. However, phones are not utilitarian desktop devices. They are things we carry around in pockets and purses, and they have evolved into fashion accessories as well. And, interestingly enough, those bits of them that are customisable have been customised in a big way. People are constantly downloading unique ringtones, and changing the colour of their casings and the like. Nokia have been masters of taking advantage of this.

Back in the PC world, one interesting trend is that for software applications that are not boring desktop office applications, the ability to customise seems to be much more common than for office applications. Consider media players and all their "skins" and cutesy controls. These applications are more to do with leisure and have a certain cool factor, and non-conformism seems to be the rule. Or perhaps it is just that Microsoft does not control the market, and does not get to rule that things will happen its way.

In any event, smart phones are more like media players than Microsoft Word. Software is turning out to be more diverse and more customisable. The first attempt to create software for smarter phones was WAP 1.0. This was designed to give operators complete control over what went on on people's phones, and it was dead in the water. In Japan, we had i-mode, which used much more open software standards, was a huge success, and had things like Java support added on later. We have the various 3G standards, some of which require support for certain software formats. And we have operating systems like Symbian and user interfaces like Nokia's Series 60, which is basically just a laundry list of media formats and software tools (many of which belong to other companies) that Nokia believes smart phones should support.

The basic picture is that we have a group of families of interoperable software tools with lots of overlap. The interfaces between these are open, and we are likely to be in a position where there will be a free for all to develop new software and new tools. If the conditions provided by the owners of any one part of the software ecosystem are too onerous, then likely somebody else will produce an alternative that will take its place. In any event, in most cases the software ecosystem will support several products in each niche simultaneously, which will keep the market competitive. To me, this seems a great position to be in. Microsoft might contribute some software to all this, but the chances of it gaining a Windows like stranglehold over the market are negligible. For one thing, it is far too late in the development of the mobile phone market for that to happen. For another, nobody wants it to happen, and given Microsoft's history, everybody will be suspicious of Microsoft's motives.

The one further issue that needs addressing is in the Economist article. The author seems to think there is a possibility that Nokia may use its strength in the handset market and economies of scale to turn its Series 60 user interface into a Windows like standard for the cellphone world. This strikes me as hugely unlikely. For one thing, Nokia doesn't own all of the standard. As I said above, Series 60 is a mix of lots of standards and formats, many of which do not belong to Nokia. And, more importantly, to be truthful Nokia doesn't have very much market power.

Nokia may have good logistics and one of the best brands in the world, but Nokia is and always has been a mediocre engineering concern: it makes beautiful phones with a nice user interface, and is able to achieve astounding margins because of this, but in terms of features, the phones are behind both Motorola and Ericsson. (Look at how long it has taken to build a triband GSM phone). While Nokia has sold a lot of handsets, most of Europe's 2G networks were built by Ericsson.
Nokia's strength, above everything, is the strength of its brand. And however much the marketroids claim otherwise, brands are fragile.

Coca-Cola's strength is not its brand: it is its global distribution system. I have never been anywhere in the world where I could not buy Coca-Cola in every store. I have been places where there was no electricity, but there was still Coca-Cola. Nobody else is remotely close in terms of a distribution system, so nobody can challenge Coca-Cola.

However, Nokia doesn't have anything except its brand. The distribution system belongs to the mobile networks and big retail chains. Whereas you cannot function in the world today without paying for Microsoft products, Nokia has nothing compelling: anyone can use another brand without any big loss to them. It is therefore difficult for Nokia to gain much leverage to get other operators to use identical software. Even if it does, Nokia's control over this software is not going to be Microsoft-like. Nokia's position is also weakened, I think, by the inadequacies of UMTS compared to CDMA2000. Although all the software mentioned in this article will likely work on either platform (plus existing 2.5G platforms like GPRS), Nokia's hardware may well be slower and buggier in the short term than that from some other manufacturers (eg Samsung). This could hurt the brand too.

I am not in fact that pessimistic about Nokia, which will continue to hold a large share of the handset market for quite some time. However, the basic point is this: neither Nokia nor Microsoft will be the Microsoft of the smart phone software market. And this is a good thing.

Do You Want to Employ Me? As I have mentioned before, I am presently an out of work technology and telco analyst, who previously worked for a large financial institution. If you would be interested in offering me a job, or if you know anyone who might be, my CV is here.
John Quiggin makes the exceptionally good point that Paul Sheehan's argument about Christianity being potentially overwhelmed by relatively fundamentalist poor-world Christians relies numerically on the presence of 480 million Catholics in Latin America, and that although conservative, these don't strike him as likely to have great crusading zeal. Latin American Catholics strike me as not being too different in character from Spanish or Italian Catholics a few decades ago: the Enlightenment did not strike them as directly as it did northern Europeans, but as their countries became richer, they absorbed most of its values anyway. This is my expectation for Latin America too. (This is consistent with what I was arguing the other day).

There are troubling things about Christianity in Africa, however. In the years since the Rwandan genocide, a number of sources have reported that nominally Christian leaders aided and egged it on (whereas Islamic leaders generally tried to stop it). I don't understand how or why this happened, but I need to make an effort to.
I see that Will Smith may be cast in Alex Proyas' film of Isaac Asimov's I, Robot. The choice to cast an African American actor is interesting, given that in the original book, the question of slavery is frequently raised. That is, if you create an artificial and intelligent robot that nonetheless has to obey you, is this akin to slavery? Of course, the film likely dispenses with this aspect of the stories entirely. (To some extent I suppose it depends on whether Smith is playing a human character or a robot character. The most important human character in the book is a woman. Many of the 'male' characters are robots).

I'm hoping for a good film here, anyway. I have faith in Alex Proyas. Dark City was a very fine piece of work. There has been talk of an I, Robot film for a very long time. Harlan Ellison wrote quite an interesting screenplay a few years back, but this may or may not have worked as a film. The book is structurally difficult to film. It consists of a series of short stories, with a number of common characters and common themes, and a certain amount of connecting narrative. Telling the whole story (as Ellison's screenplay tries to) would lead to an oddly structured film. It seems that the current idea might be just to film one of the stories. This would at least be easier.

Of course, the last time an Asimov work was filmed, the results were at best mixed. The original novelette, The Bicentennial Man, was one of Asimov's finest pieces of work. It's about Andrew Martin, an intelligent robot who slowly grows more complex and more self-aware, more organic and less mechanical, and who finally wishes to be legally declared human, basically because there is something extraordinary and special about the human condition. (I would say something divine, but Asimov would probably have objected to the religious undertones of the word. He was a great humanist, but was not a man of religion). In order for this to be allowed to happen, he in the end gives up the immortality he has as a robot and agrees to die. It is an extraordinarily moving story, and the first half of it was filmed both well and faithfully. However, the second half of the story was changed so that instead of wanting to become human because he desired the human condition, Andrew Martin wants to become human so that he can legally marry a woman. The entire "love of a woman" business was not in the original story, and while the love of a woman is no doubt a fine thing, in relative terms it is rather mundane. By making the change, director Chris Columbus and screenwriter Nicholas Kazan ripped the emotional heart out of the story. And that was sad.

Dear Hollywood. In making a film of I, Robot, you are again working with some of Dr Asimov's very best work. Please don't mess it up this time.

Tuesday, December 03, 2002

The New York Times has an article on the Dutch film Necrocam.

Toward the end of "This Is Our Youth," Kenneth Lonergan's play about disaffected New Yorkers set in 1982, the characters learn of an acquaintance's death. The news spooks the motor-mouthed Dennis into pondering the benefits of religion when confronting the afterlife. "How much better would it be," he asks, "to think you're gonna be somewhere, you know? Instead of absolutely nowhere. Like gone, forever."

Fast forward to 2001, when the Internet has given the youths in "Necrocam," a 50-minute film made for Dutch television, a less conventional way to cope with death's mysteries. Christine, a teenager with cancer, tells her friends that upon her death she wants a digital camera with an Internet connection installed in her coffin. Images of her decaying remains will then be transmitted to a Web page for all to see, making her virtually immortal.

The Times sees this as symptomatic of how all pervasive computers and the internet have become for the younger

The movie's accomplishment is to capture the way technology, including the Internet, has permeated contemporary culture. This is our youth's daily existence. The film's young people communicate through online messages, play computer games and record their pledge with a video camera instead of a quill dipped in blood. For them technology is an extension of life. So it is only logical that cyberspace would play a role in death.

generation. I can only agree with this.

I was at high school in the 1980s, an undergraduate from 1987-1991, and a graduate student from 1992-1997. My high school got its first computers in 1982, and this was the first time I had seen a personal computer. As I was studying computing science at university, I had access to a Unix machine connected to the internet from 1988. In all cases, I was discovering something new. In high school, the teachers and I were still discovering what computers were. When I was an undergraduate, I wasn't taught anything about the internet: I simply discovered by playing around that Usenet was there, and that I could send e-mail to anywhere in the world. When I was a graduate student, my contemporaries in the non-scientific world were just discovering e-mail, and even rather brilliant Arts students in some cases did not use computers, almost as a matter of pride.

I have used computers since I was 12, and the internet since I was 18, but this was always somewhat novel. In my generation, there was something novel, and geeky, and antisocial about this fact. Socially, the fact that I used computers made me unusual, and was an isolating factor. Now, however, the reverse is true. The generation ten years younger than me have been using the internet at least since they started high school. Even the arts students are immersed in it, and cannot imagine a world without it. An English graduate student is going to use it all the time for communications, research, to feed her Buffy habit when she is out of the country, and everything else. The idea that one would not is inconceivable. I know a lot of the history of computers and the net, because to some extent I lived through it, and in any event I find researching these things to be fascinating. However, for the younger generation, it is just there. It's part of life. It's the opposite of an isolating factor. This is quite different to how it was for me.

Some more thoughts. When I was a graduate student in mathematics and physics a decade ago, it was not possible to be a graduate student in these fields and not find computers used pervasively in your work. You might not have directly required them for your research (although more and more fields do, and mine certainly did), but computers were all pervasive in the field, and you used them for communicating with other researchers, writing and submitting papers, and just as part of life. There were, and undoubtedly still are, however, (often very senior) scientists who were still active and who resisted this trend. In their day, computers were not widely used, and they did their work by the old fashioned method. We would generally regard these people as a bunch of old fogeys. Old scientists did not necessarily belong to this group (as computers were invented in the 1940s and 1950s, and some Cambridge mathematicians were among the first people to use them), but some did.

Other fields (eg biology) have been transformed in exactly the same way. Fields in which this has happened have inevitably been changed by it, as what a researcher with a computer can do is different from what a researcher without a computer can do. (Oddly, I think it has much to do with using a computer for computation in most cases, too. The areas where computation can be useful are not always appreciated, but give researchers tools with great computational power, and they will figure it out).

However, ten years ago, the humanities fields were still largely exempt. I have contemporaries who did Ph.D.s in things like English and History who actively resisted using computers, and insisted that they were not relevant to their work, and were not useful. (One person I am thinking of was a particularly brilliant researcher who is now a faculty member at a very prestigious university). These people now probably do use computers and the net, but I suspect that they still don't really enjoy it.

As I mentioned, if you meet someone now who is a graduate student in English, or is about to start a Ph.D. in English (as I did the other day), you find that the computer skills go very deep. The gap between old fogeys and the younger generation is going to become apparent here, too. This is interesting, and the humanities fields are going to be changed by this.

Michael's pet foible of the day. I loathe the expression "Information Technology" or "IT". Fundamentally, computers are for computation, and that is where their biggest impact lies. Management of information is secondary. (I will explain why this is so some other time. Promise).
This very interesting article in the Boston Globe addresses a question to which I had not previously given much thought: how is it that a religion that appears to be in no way evangelical (Hinduism) managed to become a religion of 900 million people? The gist of the article appears to be this. Basically, the idea of a single homogeneous Hinduism is relatively new: it is to some extent a reaction against Islam and Christianity, both of which have come to India and both of which are evangelical, and also a product of the institutions of the British empire, which was more comfortable with the idea of a "single religion" than with what actually existed in India at that time, which was a very complex and varied set of mutually tolerant faiths. The empire recruited Indians from a certain elite (partly for administrative purposes, partly because it found their culture interesting) and spread a Brahmin culture that much of India was not previously familiar with. The result was also to spread a sense of India as a single nation. Thus, a unitary Hinduism is a recent development, and may be a force (and tool) for nationalism, the way Christianity was for the Roman empire.
Update This somehow got garbled when I first posted it. It has now been fixed.
Over at The Eye of the Beholder, several people are discussing which Australian beer is "the best beer in the world". I think they should go to Bruges in Belgium, to a bar called 't Brugs Beertje, perhaps the most famous bar in Belgium, which has 300 different beers available. They should ask the barman for some help, and try a few of the best Belgian beers. The two Coopers beers that are being discussed are very nice, but as far as I am concerned, the best beer in the world is made in Belgium.
Transport Blog has a discussion of the weird, regulated world of airport landing fees, and suggests that there must be some pretty weird incentives in place, without actually knowing what they are. Because I am a sad geek, I have investigated the matter at length, with the following conclusions.

Okay, let's have a go.

Business travellers out of London prefer to use Heathrow, mainly because there is a greater choice of flights and connections. On routes where there are significant numbers of business travellers and there are existing flights out of Heathrow, it is virtually impossible for new entrants to the market to compete by flying to the same destination out of Gatwick or Stansted, even if the total demand for seats to that destination is increasing. This is because the business class travellers continue to go to Heathrow, and it is virtually impossible for a full service airline to make money with a plane full of discounted economy passengers. As demand goes up, the existing Heathrow based carriers simply increase the proportion of business class seats in their planes and make monopoly profits, and the total number of economy seats on the route drops. This is perverse, but if competitors introduce flights from Gatwick, they are still unable to make money, as the premium traffic remains at Heathrow and they pick up only economy passengers. (It is very difficult to compete for the business traffic on price, because travellers are generally not paying for their own tickets).

Ideally what you would like is for competitors to be able to fly from Heathrow also. That way, business traffic would be split between the carriers, the existing carriers would have their margins eroded, the new carrier would be able to make money, and we have two carriers making normal profits rather than one making monopoly profits and the other losses.

Why does this not happen? Generally, new carriers are not able to fly from Heathrow, because of a mixture of outright protectionism (eg the Bermuda II treaty that restricts the UK-US market to BA, Virgin, United and American) and a shortage of slots. However, this shortage of slots is a consequence of the regulatory situation, not anything innate.

There are lots of carriers flying on small, insignificant routes that none the less have rights to fly from Heathrow. (Normally these are national carriers of insignificant countries that gained these rights many years ago and have had them grandfathered in). In most cases, it doesn't matter if these airlines fly from Heathrow or Gatwick, because their passengers have no choice of airline and will fly from whatever airport the flight goes from. Ideally you would want these airlines to move to Gatwick or Stansted and make way for airlines flying on more competitive and economically more important routes. There are two ways you can do this. You can increase landing charges at Heathrow (ideally to what the market will bear) but not at Gatwick, and give the smaller airlines an economic incentive to move. Alternately, you can simply allow these airlines to sell their landing slots to other airlines for large one-off payments. Neither of these things is presently permissible, and as a consequence we have a less competitive aviation market than we should. This means less choice of flights for passengers and higher fares.
Dear mythical Mr Podmore

That would be Isambard Kingdom Brunel.

Monday, December 02, 2002

Eric Raymond has a good piece which makes an obvious point that I have touched on before: specifically, that Europe's demographics suck, and America's don't. Europe's inability to cope with this is made much worse by its large welfare state, which will ultimately act like an unfunded pension scheme.

A couple of other observations.

Japan is even worse than Europe in terms of population decline, although they don't have the welfare state that Europe does. Even so, they are really screwed. They will not have ethnic riots, because they are ethnically pretty homogeneous, but I am sure they may well find other outlets almost as unpleasant.

Normally one would expect the countries of Eastern Europe to be dynamic places, given they have only recently gained economic and political freedom. At the moment this looks true. However, their demographics suck too, and this will eventually catch up with them.

By listening to ancestral 1970s lefties prophesying doom, China, with its one child policy, has engineered a future demographic crisis of its own. It isn't really visible at this point, because they are still feeling the momentum of a huge population boom, but in a couple of decades it is really going to hit. They cannot solve this with immigration the way the US (at least partly) has, as they are too big, but they can at least avoid constructing a welfare state.

India has much, much, much better demographics than China for the long term. If they can use all these people in an economically productive capacity, as they are showing signs of doing, the results could be extraordinary.
Henry Copeland (via Instapundit) comments that Google has been claiming "more than 150 million searches a day" for nearly two years, and he speculates as to why the number has not been updated. Maybe Google is more interested in other metrics (like revenue). Maybe Google is growing more slowly than (say) eBay, or maybe Google is doing spectacularly and they don't want their competitors to know and/or they want to make a spectacular announcement just before their IPO. (In the comments, a poster suggests that a fourth possibility is that they have improved their algorithms, the average number of searches per user has dropped, and they don't want their competitors to know this).

My impression of Google, is that three years ago it had been discovered only by hard core geeks, two years ago it was being used by all power users, one year ago it was the standard search tool for most people but secretaries and the like were sometimes still using other search engines, and that today its use is utterly universal. I find possibilities one and two to be inconceivable, and four to be pretty unlikely, as even if it is true, the overall usage would be growing much faster than the reduction due to better efficiency.

This does make you wonder what Google is up to.
There is a profoundly irritating article in yesterday's New York Times magazine about the rise of Ikea furniture in America, and other related issues. A lot of this actually applies to quite a few places throughout the world, and not just to America. The point is that Ikea sells inexpensive furniture of surprisingly high quality. The effect of this has not been so much on poorer Americans as on middle class Americans, for whom furniture has changed from being a durable good into being a disposable good. The article also compares this with the spread of other sophisticated, stylish goods like cellphones, Swatch watches, Blackberries and iPods: cool, design oriented things with finite lives. The high quality design of these things comes of course from the PC. Whereas high quality design previously required a lot of labour, the aid of the PC means that it can now be present in everything, including low cost furniture.

Josephine Rydberg-Dumont noticed a corollary change that had similar advantages for Ikea's American experiment: an upmarketing of downmarket goods. Calvin Klein's cK T-shirts, Starbucks coffee, basketball shoes designed as if for the space program, sushi in the Grand Union -- these were tokens of conspicuous quality for a broad part of the population. ''Ten or 15 years ago, traveling in the United States, you couldn't eat well,'' she said. ''You couldn't get good coffee. Now you can get good bread in the supermarket, and people think that's normal. I like that very much. That's more important to the good life than the availability of expensive wines. That's what Ikea is about.''

Okay, this is all great. People throughout the country can buy goods of a quality that people could only get in the larger cities a decade or fifteen years ago.

However, for reasons unknown, the author then seems to have felt the need to turn the article into a piece of class warfare.

It was a particularly good thing to be about in the 1990's, a decade in which the economic folk tales were of astronomical success (or, by the end, vertiginous falls), but the broad reality was quite different: for Americans in the middle of the wage scale, real earnings, adjusted for inflation, declined or held flat for much of the decade. Even when they were putting away a few dollars, members of the middle class were losing ground to the people to whose status they aspired, the heroes of those folk tales. The majority of Americans were participants in a zeitgeist of obscene riches without having a piece of the action.

What they could have, in just the same degree as the new economy's new rich, was the immaterial titillations of design. Design was a perfect class commodity for a class that was going nowhere. It added value to a toilet brush or a garbage pail, to say nothing of personal computers. The ubiquity of these fluid computer-generated designs suggested an attractive world of class mobility. It promised that you could be moving forward, even if your paycheck was slipping back: why, just look at your toothbrush, designed by Philippe Starck for Alessi.

So the people of America, whose real income was not increasing and whose paychecks were slipping back, were suddenly able to buy a much greater variety of all sorts of goods, of better quality than they had been able to buy before. Even though people were much better off, they somehow weren't much better off. Sounds silly to me, but perhaps not to the writer.

Let's get back to the beginning of the above quote: "for Americans in the middle of the wage scale, real earnings, adjusted for inflation, declined or held flat for much of the decade". What exactly does this mean? What it means is that the dollars that people take home increased by about the same percentage in an average year as did the calculated inflation rate. (Actually, for the middle classes this is not true. Real wages in the middle brackets have increased over the last 15 years, just not by as much as in the top brackets).

However, what is the inflation rate? Traditionally we take a basket of goods that typical consumers buy, and we see how the price of this basket changes over time. This is fine for goods and services that are the same every year: a pound of sugar, a kilowatt-hour of electricity, a litre of petrol, a haircut, or a ride on the same train. For manufactured goods of any kind, however, it is extremely problematic. The VCR I buy today has many more features than the one I bought ten years ago, but the basket probably just contains the cost of the average VCR. If people buy DVD players instead of VCRs, then the basket of goods can be changed and changes in DVD player prices taken into account in future. However, the calculation doesn't take into account the fact that people make the change because DVD players are better. When people are buying large numbers of manufactured products with short lifetimes and rapidly changing technologies, figuring out how to take these effects into account in the inflation basket is extraordinarily difficult. If people are buying Apple's iWidget this year, and next year they stop buying iWidgets and start buying Microsoft's Zbox instead, the broad conclusion we can draw is that the Zbox is a better product than the iWidget. That is, people are better off because they are buying a higher quality product. The point of this is that the calculated inflation rate becomes less and less accurate when goods are rapidly improving in quality and/or the goods people are actually buying are changing rapidly. In particular, in these cases the calculated inflation rate overstates the true rate by more and more. Over the last 15 years, these conditions have been met. Our spending patterns have changed dramatically.
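The fixed-basket problem can be made concrete with a toy calculation. Here is a minimal sketch in Python, using the standard Laspeyres (fixed-basket) index formula; the goods and every price in it are invented purely for illustration:

```python
# Toy illustration of fixed-basket (Laspeyres) measured inflation versus
# what consumers actually pay once they substitute. All prices invented.

# Year 0: everyone buys a VCR; the DVD player exists but is too dear.
p0 = {"vcr": 300.0, "dvd": 350.0}
q0 = {"vcr": 1.0, "dvd": 0.0}

# Year 1: VCRs have crept up in price; DVD players have plunged (and are better).
p1 = {"vcr": 320.0, "dvd": 250.0}

# Laspeyres index: price the *old* basket at the new prices.
laspeyres = sum(p1[g] * q0[g] for g in q0) / sum(p0[g] * q0[g] for g in q0)
print(f"Measured inflation: {(laspeyres - 1) * 100:.1f}%")  # 6.7%

# But a consumer who switches to the (preferred) DVD player now spends
# 250 where they once spent 300 -- their cost of ownership actually fell,
# and they got a better product into the bargain.
actual = p1["dvd"] / p0["vcr"]
print(f"Change in what the switcher pays: {(actual - 1) * 100:.1f}%")  # -16.7%
```

The fixed basket reports prices rising 6.7%, while the substituting consumer is paying less for a better good. Real statistical agencies use chaining and quality adjustments to mitigate exactly this, which is the difficulty the paragraph above describes.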

And this matters. If wages are increasing by 5% and the calculated inflation rate is 5%, this tells you that people are not better off. If, instead, the true inflation rate is 4%, then over 10 years people are 10% better off. If the true inflation rate is 3%, people are 21% better off. If the true inflation rate is 2%, people are 34% better off. If it is 1%, people are 47% better off.
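These compounding figures are easy to check. A quick sketch in Python, assuming nominal wages grow 5% a year for ten years against various true inflation rates:

```python
# Real gain over 10 years when nominal wages rise 5% a year
# but the true inflation rate is lower than the measured 5%.
wage_growth = 1.05
years = 10

for true_inflation in (0.04, 0.03, 0.02, 0.01):
    real_gain = (wage_growth / (1 + true_inflation)) ** years - 1
    print(f"true inflation {true_inflation:.0%}: {real_gain:.0%} better off")
# prints 10%, 21%, 34% and 47% respectively
```

The gap between nominal growth and true inflation compounds annually, which is why even a one-point overstatement of inflation adds up to a double-digit difference over a decade.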

The real point of this is that we are using a single number (the inflation rate) to model a complicated situation: the change in prices and the change in spending patterns. The longer the period of time being modelled, and the greater the change in people's spending patterns over that period, the more meaningless inflation numbers become. They are too crude to model what is going on. However, the Times Magazine article simply assumes that because the statistically crude measure of "real wages" is flat, people are somehow not better off. Instead, rich people are doing great, and middle class people are merely being thrown the small crumb of "design" and a better choice of goods.

The point in reality is that better design and a better choice of goods are the way in which people are benefiting from economic growth. If I can buy myself an excellent latte in Sierra Blanca, Texas (as the permalinkless Mickey Kaus recently asserted that I can), then this wider availability in actuality represents a decrease in the cost of a good latte. Why? Because previously, to have good lattes regularly I might have had to live in New York City, where the cost of renting somewhere to live is higher; now that lattes are regularly available to me in Sierra Blanca, I can live a life that includes lattes for much less than I could before. What did rich people previously spend their money on? Well, they spent it on products with better design. These are now available more cheaply and throughout the country. More people can afford lives that include these things, and as those lives can be lived in places where it is cheaper to live, the cost of a life including them has dropped. None of this shows up in inflation statistics, but the effect is real. Thanks to computers and inventory management systems, shops throughout the country can carry much greater numbers of lines of goods. This allows people to substitute goods that they like (and which are of higher value to them) for goods that they don't like. This doesn't show up in the statistics either, but it surely means people are better off, and, if you like, richer. Ikea is an obvious manifestation of this.

The goods in my living room (and in yours) are much more sophisticated and interesting than they were ten years ago. This unequivocally means I am better off.

If you haven't yet done so, make sure you participate in International Buy Something Day.
Mathew Bates points to a column from Paul Sheehan in the Sydney Morning Herald on the rise of a more fundamental version of Christianity, particularly in Africa and other parts of the third world. Sheehan has clearly just read the much more detailed (and much better) article on this subject by Philip Jenkins in the October Atlantic Monthly and is giving us the Cliff's Notes version, and he comes to a much more alarmist conclusion than Jenkins himself does. (This is typical Sheehan). There is also a very interesting follow-up letters column in this month's issue. We are getting what is essentially pre-Enlightenment Christianity in much of the poor world. It's not homogeneous, of course - there is pre-Enlightenment Christianity in the rich world and vice versa. The question is whether Christianity in these countries will evolve into something more modern and Western as these countries develop, and/or as people from these countries emigrate to Western countries. I am personally reasonably optimistic: the Enlightenment was essentially a Christian project, and Christians worldwide are constantly in contact with its consequences (which include development, freedom and prosperity).

The problem with Islam and Islamic people in the West is that there is no such thing as post-Enlightenment Islam. To retreat into Islam is to retreat into the past almost by definition, and if you are part of a minority in the West that perceives itself as embattled, then retreating into Islam is a way of emphasising the differences between you and mainstream society. None of these things is really true for fundamentalist forms of Christianity.

This is not to say that fundamentalist Christianity cannot cause problems in the West: clearly it can and does. (See, for instance, abortion clinic bombings in the US). However, the stance that "fundamentalism is the only true Islam and if you are not a fundamentalist you are not a Muslim" is a position that finds far more opposition, and works far less well, if anyone attempts to apply the equivalent to Christianity.

This is really a worry.

Jolly good, wot! Anyone for tennis? That'll be ten ponies, guv. You're the epitome of everything that is english. Yey :) Hoist that Union Jack!

How British are you?

this quiz was made by alanna

The above was all automated, and I will merely add that British people do not seem to play tennis, merely watch it for two weeks every summer. (Via Andrew Ian Dodge).

Sunday, December 01, 2002

Australia-England cricket has been one sided since Australia won the first of its current streak of eight successive series in 1989. However, with today's ludicrously comprehensive win over England, I think the situation is approaching farce. I, and the Australian sporting public, have continued to watch the matches until now because of the history of the rivalry (frequent games since 1877) and because there is something the Australian character finds deeply appealing about beating the English at cricket, given the types of people who run English cricket and who write about it in the Telegraph. (People of my age also remember Australia being badly beaten by the English in 1981 (especially), 1985 and 1986, and still feel some desire for revenge). There have been occasional suggestions that series against England be reduced from five matches to three, which is what Australia plays against most other opponents, but there has been no action on this due to there still being so much public interest in the matches. However, I think the three matches in the last month (all three of them absolute thrashings), and (for some reason) today in particular, mark the point at which it all started to become a joke. Unless something dramatic happens, my interest in watching Australia play England is going to fall off.

Mike Coward, cricket correspondent of the Australian, was suggesting a week or two ago that the Australian Cricket Board was taking the high television ratings and gate receipts for granted, and that inevitably interest would be lost in Australia-England and cricket in Australia would then suffer. The Australian Cricket Board was doing a bad job of promoting alternative rivalries to take the place of Australia-England in the future.

The ideal rivalry is surely Australia-India, given the enthusiasm Australians have for cricket. This should have been promoted more heavily, sure, but the issue does to some extent come back to results on the field. India has great batsmen and so-so bowlers, and Indian conditions are very different to everywhere else in the world. Indian teams are invincible in India, and are incapable of winning outside India. Until this is no longer the case, this rivalry is a relatively difficult one to promote. Certainly, though, it is getting there from the point of view of the players. Touring India used to be perceived as a chore, but now beating India in India is seen as a holy grail for the Australian players. Current captain Steve Waugh was saying just today that this is the one challenge he still has in the game, and another tour of India is the one thing he would like to undertake. Australia do not tour India for another 18 months, and it is pretty unlikely Waugh will still be playing then, but it is clearly still on his mind.

Update: There's a piece by Peter Roebuck in the SMH on whether Steve Waugh will be given the chance to continue past the end of this series. (A writer in the London Times (no link because of their subscription policy) wrote this morning that this series will be "surely Waugh's last"). Waugh clearly needs a hundred, and preferably a big one, before the end of this series in order to hold his place. That said, Roebuck makes two good points. Firstly, Waugh is presently playing well enough that he would certainly hold his place if he were younger - he has made good but not spectacular contributions with the bat in three of his last four tests. If a batsman should be dropped on form, it is Darren Lehmann. Secondly, the dropping of another senior player (Mark Waugh) has actually upset the side a little: Australia's infield catching has been below par in this series. It might be worth waiting a little longer before upsetting the balance again. Bearing these facts in mind, plus the fact that Ponting is relatively inexperienced as captain, and that giving Ponting the additional pressure of the test captaincy might upset his (presently superb) batting, I think I would send Steve Waugh to the West Indies. It's a fairly tricky tour, and four years ago it was Steve Waugh's first tour as test captain, and he did not cope especially well. This assumes he can get a century in Melbourne or Sydney. Really he needs it in Melbourne, because if he is not going on past the end of the series, the selectors might feel the urge to give him some warning so that he can announce that Sydney is his last test. He knows all this, and he has presumably spoken to the selectors, so he knows more than I do. When he has been in these sorts of pressure-filled positions in the past, he has always risen to them. We shall see if he does this time too.
