Archive for March, 2009

Orlando

Phil Fersht:

Orlando is my version of a very, very bad dream: a world where you can actually buy a fake Guinness in a fake Irish pub, and get stuck behind entire families in lengthy queues where the kids start at 220lbs… you never normally ever see people like this, but somehow Orlando acts as a magnet for over-sized, under-cultured plasticity.  Seriously, why bother with Guantanamo for interrogations? Just lock suspects in Epcot for a couple of days and we’ll find out who killed JFK, which Ritz-Carlton Osama Bin Laden resides in these days, and even where Bernie stashed his $50 billion…

Could not agree more…

Dell Inspiron Mini 10

Well, the Dell Mini 10 doesn’t rock as much as I had hoped. The 3-cell battery is a big minus, especially when you can already buy other machines with 6-cell batteries today:

Dell Inspiron Mini 10 – A Review of the Dell Inspiron Mini 10

(via)

Finally: Business-IT alignment is dead

Joe McKendrick nails it on ZDNet: ‘Business-IT alignment’ is dead… whatever it was

Long the subject of countless articles, blogs, and seminars: Do IT folks “get” the business? How do we achieve “business-IT alignment”?
Perhaps it’s time to put this tired argument to rest. IT folks not only “get” the business, they are the business.

But perhaps the time has come to stop talking about “alignment” as if the business and IT were separate organizations. They are one.

What makes a weblog a weblog?

An oldie but a goodie. By Dave Winer. From 2003, but still fresh.

Rather than saying “I know it when I see it” I wanted to list all the known features of weblog software, but more important, get to the heart of what a weblog is, and how a weblog is different from a Wiki, or a news site managed with software like Vignette or Interwoven. I draw from my experience developing and using weblog software (Manila, Radio UserLand) and using competitive products such as Blogger and Movable Type.

Pipes in Text

A few days ago, Stefan joined a rant by Peter Williams about Yahoo! Pipes’ lack of a textual representation:

While for many models (and programs, and anything in between) having a visual representation is nice when you want to read (or view) it, visual authoring sucks in the vast majority of cases. Sadly, being able to efficiently edit something in a text editor, with versioning and diff support and so on is in general not what impresses those who make purchasing decisions.

This is so true.

A day later, he links to a great presentation on Pipes and the Y! Query Language. Says Abel Avram in the post that points to the presentation by Jonathan Trevor, captured at QCon SF 2008 (53 mins):

Yahoo Pipes uses a visual tool to specify a series of pipes through which input data flows and is filtered and processed in various ways. There are many data sources like personal data, CSV, feeds, web pages, services like Flickr, Yahoo Search, and others. Processing is done through operators like: Filter, Loop, Regex, Sort, Union, and others. The result is shown in a web page, a feed, or given to an application. The service runs in Yahoo’s cloud, is free and does not offer security protection for sensitive data since anyone can copy and run a pipe made by someone else.

YQL is similar to Pipes but uses a textual language and can process both Yahoo web services data and any structured data with a URL. It has an SQL-like syntax with three statements: SELECT, SHOW, DESC. This approach is more powerful and the applications created are protected.

Thanks, this is what a lot of people are looking for.
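
To make the contrast with the visual editor concrete, here is a minimal sketch of what a textual query looks like: a YQL SELECT sent to the public REST endpoint from a few lines of Python. The endpoint, feed URL, and response shape are assumptions for illustration only; the point is the SQL-like statement itself.

```python
# Minimal sketch: run a YQL statement against the public endpoint and
# print the matching titles. Endpoint, feed URL, and JSON shape are
# assumed here for illustration.
import json
import urllib.parse
import urllib.request

yql = 'select title from rss where url="http://rss.news.yahoo.com/rss/topstories" limit 3'
url = ("http://query.yahooapis.com/v1/public/yql?"
       + urllib.parse.urlencode({"q": yql, "format": "json"}))

with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read().decode("utf-8"))

for item in data["query"]["results"]["item"]:
    print(item["title"])
```

A one-line statement like that can live in version control and show up in a diff, which is exactly what the visual Pipes editor does not give you.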

Primary Aluminium Production Cost

Metal Miner with two excellent posts:

  1. Power Costs in the Production of Primary Aluminum
  2. Cost Build Up Model for Primary Aluminum Ingot Production

Now I would like to see this continued: a cost model for remelt Aluminium ingot would be nice (and should not be too difficult), followed by a look at cost structures in downstream Aluminium manufacturing, whether we talk about rolling, extrusion, or casting.
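
As a very rough illustration of what such a remelt cost build-up might look like, here is a sketch; the structure is the obvious one, and every number below is a placeholder, not real market data:

```python
# Very rough cost build-up sketch for remelt aluminium ingot.
# All figures are placeholders, purely for illustration.

def remelt_ingot_cost(scrap_price,       # $ per tonne of purchased scrap
                      melt_loss,         # fraction of metal lost in melting
                      energy_kwh_per_t,  # melting energy per tonne of ingot
                      energy_price,      # $ per kWh
                      conversion_cost):  # labour, consumables, casting, overhead ($/t)
    """Cost per tonne of remelt ingot: metal input grossed up for melt
    loss, plus melting energy, plus conversion cost."""
    metal_cost = scrap_price / (1.0 - melt_loss)
    energy_cost = energy_kwh_per_t * energy_price
    return metal_cost + energy_cost + conversion_cost

# Example with made-up inputs:
print(remelt_ingot_cost(scrap_price=1200, melt_loss=0.03,
                        energy_kwh_per_t=600, energy_price=0.08,
                        conversion_cost=120))   # ~1405 $/t
```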

Guys, go on. You rock!

Recipe for Disaster

Felix Salmon in Wired 17.03: Recipe for Disaster: The Formula That Killed Wall Street:

For five years, Li’s formula, known as a Gaussian copula function, looked like an unambiguously positive breakthrough, a piece of financial technology that allowed hugely complex risks to be modeled with more ease and accuracy than ever before. With his brilliant spark of mathematical legerdemain, Li made it possible for traders to sell vast quantities of new securities, expanding financial markets to unimaginable levels.

His method was adopted by everybody from bond investors and Wall Street banks to ratings agencies and regulators. And it became so deeply entrenched—and was making people so much money—that warnings about its limitations were largely ignored.

Then the model fell apart. Cracks started appearing early on, when financial markets began behaving in ways that users of Li’s formula hadn’t expected. The cracks became full-fledged canyons in 2008—when ruptures in the financial system’s foundation swallowed up trillions of dollars and put the survival of the global banking system in serious peril.

The damage was foreseeable and, in fact, foreseen. In 1998, before Li had even invented his copula function, Paul Wilmott wrote that “the correlations between financial quantities are notoriously unstable.” Wilmott, a quantitative-finance consultant and lecturer, argued that no theory should be built on such unpredictable parameters. And he wasn’t alone. During the boom years, everybody could reel off reasons why the Gaussian copula function wasn’t perfect. Li’s approach made no allowance for unpredictability: It assumed that correlation was a constant rather than something mercurial. Investment banks would regularly phone Stanford’s Darrell Duffie and ask him to come in and talk to them about exactly what Li’s copula was. Every time, he would warn them that it was not suitable for use in risk management or valuation.

…In finance, you can never reduce risk outright; you can only try to set up a market in which people who don’t want risk sell it to those who do. But in the CDO market, people used the Gaussian copula model to convince themselves they didn’t have any risk at all, when in fact they just didn’t have any risk 99 percent of the time. The other 1 percent of the time they blew up. Those explosions may have been rare, but they could destroy all previous gains, and then some.

In the world of finance, too many quants see only the numbers before them and forget about the concrete reality the figures are supposed to represent. They think they can model just a few years’ worth of data and come up with probabilities for things that may happen only once every 10,000 years. Then people invest on the basis of those probabilities, without stopping to wonder whether the numbers make any sense at all.

As Li himself said of his own model: “The most dangerous part is when people believe everything coming out of it.”

One lesson: always read the fine print, folks.
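
For reference, the core of Li’s approach is the Gaussian copula. The bivariate form below is only a sketch of the general shape (Li applied it to default-time distributions), but it shows how the entire dependence structure hangs on the single correlation parameter that Wilmott warned about:

$$C_\rho(u, v) \;=\; \Phi_\rho\!\bigl(\Phi^{-1}(u),\, \Phi^{-1}(v)\bigr), \qquad u, v \in [0, 1]$$

Here $\Phi$ is the standard normal CDF and $\Phi_\rho$ the bivariate normal CDF with correlation $\rho$: feed in two marginal (default) probabilities and out comes a joint probability, with all of the dependence compressed into that one supposedly constant $\rho$.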

Cloud Computing Futures by Microsoft Research

Microsoft Research is investigating Cloud Computing Futures:

To create novel data center solutions, designs must be based on comprehensive optimization of all attributes, rather than gradually accruing incremental changes based on current technologies and best practices. The Cloud Computing Futures team is tasked to invent on a large scale. Our goal is to reduce data center costs by four-fold or greater, including power consumption, while accelerating deployment and increasing adaptability and resilience to failures.

Great claim. Let’s see what Microsoft delivers. (via)

Google App Engine – out of Beta

O’Reilly Radar: Google App Engine Lets Your Web App Grow Up

After today developers can pay to have more storage, more bandwidth, more CPU time and send more email. The costs as of this morning are listed below with a comparison to the AWS equivalent cost.

• $.10 per CPU core hour (AWS charges $.10/hr for a small, standard Linux instance and up to $1.20/hr for an XL, Hi-CPU Windows instance in EC2)
• $.10 per gigabyte transferred into AE (AWS charges $.10 for all data transferred into S3)
• $.12 per gigabyte transferred out of AE (AWS charges $.17 for the first 10 TB/month transferred out of S3)
• $.15 per gigabyte stored per month (AWS charges $.15 for the first 50 TB/month stored on S3)
• $.0001 per email (AWS does not have an equivalent)

A huge concern with App Engine is platform lock-in. Google provides a lot of powerful, but non-standard APIs and features that make switching platforms difficult. Developers can extract themselves from App Engine via projects like AppDrop, but it is still risky to use their platform without an SLA. Without a guarantee Google could theoretically decide to raise prices unreasonably. Is it likely? No, but it is something that developers need to think about before committing to any platform.
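
To put the price list above in perspective, here is a back-of-the-envelope sketch in Python using the App Engine rates quoted; the usage figures in the example call are made up purely for illustration:

```python
# Rough monthly cost estimate from the App Engine prices quoted above.
# The usage numbers in the example call are invented for illustration.
CPU_PER_CORE_HOUR    = 0.10    # $ per CPU core hour
BANDWIDTH_IN_PER_GB  = 0.10    # $ per GB transferred in
BANDWIDTH_OUT_PER_GB = 0.12    # $ per GB transferred out
STORAGE_PER_GB_MONTH = 0.15    # $ per GB stored per month
EMAIL_EACH           = 0.0001  # $ per email sent

def monthly_cost(cpu_hours, gb_in, gb_out, gb_stored, emails):
    """Estimate one month's bill for a hypothetical App Engine app."""
    return (cpu_hours   * CPU_PER_CORE_HOUR
            + gb_in     * BANDWIDTH_IN_PER_GB
            + gb_out    * BANDWIDTH_OUT_PER_GB
            + gb_stored * STORAGE_PER_GB_MONTH
            + emails    * EMAIL_EACH)

# e.g. 500 CPU hours, 20 GB in, 100 GB out, 50 GB stored, 10,000 mails
print(f"${monthly_cost(500, 20, 100, 50, 10_000):.2f}")  # -> $72.50
```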

The lock-in may be a showstopper, at least for some; it is, imho, much stronger than with AWS, for example. So choose wisely…

App Engine quotas here.

And Dare asks Is Google App Engine the wrong product for the market?

Below are the two categories of people I surmised would be interested in spending their hard-earned cash on a book about cloud computing platforms:

  1. Enterprise developers looking to cut costs of running their own IT infrastructure by porting existing apps or writing new apps.
  2. Web developers looking to build new applications who are interested in leveraging a high performance infrastructure without having to build their own.

As I pondered this list it occurred to me that neither of these groups is well served by Google App Engine.

Given the current economy, an attractive thing to enterprises will be reducing the operating costs of their current internal applications as well as eliminating significant capital expenditure on new applications. The promise of cloud computing is that they can get both. The cloud computing vendor manages the cloud so you no longer need the ongoing expense of your own IT staff to maintain servers. You also don’t need to make significant up-front payments to buy servers and software if you can pay as you go on someone else’s cloud instead. Google App Engine fails the test as a way to port existing applications because it is a proprietary application platform that is incompatible with pre-existing application platforms.

Kindle 2

xkcd.com:

I had the same thought.

Peter Kafka in Media Memo on All Things Digital, commenting on Jeff Bezos’s recent pitch on “The Daily Show” with host Jon Stewart:

That is: For some folks, the ability to download books over the air, store a gazillion titles on a single device and have a “freaky” voice read them aloud to you are compelling reasons to shell out $359 for the gadget. For skeptics like Stewart, it’s hard to see how Amazon (AMZN) has improved upon the ink-and-paper book, which uses technology that has worked pretty well for several hundred years.

And CNET Crave on Designing the Kindle 2:

“One of the great things about Kindle is it doesn’t ever get hot,” Amazon Vice President Ian Freed said in an interview at Amazon’s downtown office here. That’s important, Freed said, given that the company has one main goal with the Kindle–making the product as invisible to users as possible when they are reading.

“The most important thing for the Kindle to do is to disappear,” Freed said. That was the goal with the first device and was also a key factor in deciding what would go in the sequel, which started shipping on Monday. There are the obvious factors, like the thinner, sleeker design. But there are also things like an improved cellular modem. As a result, Kindle users will find themselves out of range in fewer places to get updates or buy a new book.

Well, for us Europeans it is not yet available anyway. I will have a look at it when it comes over, but for the time being I like my dead-tree library.