It’s not just Australia that has fucked internet

Here’s an all too common example of political incompetence and bungling when it comes to technology and infrastructure planning.

And I thought it was just the NBN that had issues – at least we’ve got something rolling (albeit limping) out…

Of course The Backburner nails it in a piece that you’d be forgiven for thinking was all too real:

The Turnbull Government has announced that the delays in rolling out the NBN and limits on download speeds were all along part of a calculated effort to somehow slow down Australia’s pirating of Game of Thrones.

Telstra Pricing

I’ve been in the US for the last two weeks. Before I left I purchased a Telstra 14 Day Travel Pass. It worked out well, because it ran out when I was at the airport about to head home.

Here’s the text message I got (the second message is the item of interest):

Telstra notification

Basically, since I was a Travel Pass customer I was now going to pay 3c per MB. At this price I’m assuming Telstra still makes some profit.

Which means that if you are a Data Pack customer, and thus forced to pay $3 per MB (ie 100 times as much as a Travel Pass customer), then Telstra is well and truly fucking you over.
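For the record, here’s the arithmetic behind that 100x claim as a quick back-of-envelope sketch (the 500 MB usage figure is just illustrative):

```python
# Comparison of the two Telstra roaming rates quoted above.
travel_pass_per_mb = 0.03   # 3c per MB (Travel Pass)
data_pack_per_mb = 3.00     # $3 per MB (Data Pack)

ratio = data_pack_per_mb / travel_pass_per_mb
print(f"Data Pack customers pay {ratio:.0f}x the Travel Pass rate")

# What a modest 500 MB of roaming data costs under each plan
mb_used = 500
print(f"Travel Pass: ${mb_used * travel_pass_per_mb:,.2f}")
print(f"Data Pack:   ${mb_used * data_pack_per_mb:,.2f}")
```

That’s $15 versus $1,500 for the same data.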


Twitter: The poster child of technology inefficiency

You should read this post from Eugene Wei (no really, go read it) and consider it in terms of inefficiency.

Think back to when Twitter first started. By the time you and I were getting involved, the need for the SMS-driven character limit was likely long gone. Only the earliest of early adopters would have actually been around when it was in fact needed.

But the character limitation was there, which was why a whole bunch of work-arounds started appearing. The first was the url shortener movement.

Result: Twitter was imposing unnecessary barriers, and other companies then created tools (and entire businesses) to overcome them. Here’s what I said about it in 2009. So inefficient. And costly.

URL shorteners went on to make up for some of the inefficiency by enabling click analysis and reporting, so it wasn’t all bad of course. But that doesn’t necessarily mean it was the right path (since reporting could have been implemented in other ways, and much more efficiently).
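For anyone who wasn’t around for the shortener era, the core mechanic was trivial – which is part of why it was such an odd thing for entire businesses to be built on. A toy sketch (the class name and 6-character code length are my own choices, not how any real shortener works):

```python
import hashlib

class ToyShortener:
    """A toy URL shortener: maps short codes to long URLs and counts clicks."""

    def __init__(self):
        self.urls = {}    # code -> long URL
        self.clicks = {}  # code -> click count

    def shorten(self, long_url):
        # Derive a stable 6-character code from a hash of the URL.
        code = hashlib.sha1(long_url.encode()).hexdigest()[:6]
        self.urls[code] = long_url
        self.clicks.setdefault(code, 0)
        return code

    def resolve(self, code):
        # Each resolution is a 'click' -- this is where the analytics come from.
        self.clicks[code] += 1
        return self.urls[code]

s = ToyShortener()
code = s.shorten("https://example.com/a/very/long/path")
s.resolve(code)
s.resolve(code)
print(code, "->", s.urls[code], "clicks:", s.clicks[code])
```

The redirect step is the whole business: because every click passes through the shortener, it gets the analytics for free.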

And that brings us back to Eugene’s article where he so beautifully paints[1] the picture of an inefficient product going from bad to worse.

The inefficiency highlights so far:

  • impose length limitation and keep it (for no good reason)
  • URL shorteners become popular to allow long links to be included in tweets
  • tweet storms used as a way to write longer updates
  • messy reply threading not understood by most people
  • screenshorts now becoming common as another way of writing longer updates

Soooo inefficient.

[1] Note: Eugene’s post, whilst certainly discussing the merits of potentially removing the character limit, isn’t necessarily focussed on that – he has other key points around the larger picture of Twitter’s strategy that he covers really well.

Finally using an ad blocker

I’ve resisted using an ad blocker for years, since:

  1. I don’t mind ads
    Especially if they are personalised (as most ads are now). And I’m happy for ads to track me all over the web if it means I get a better ‘ad experience’
  2. I realise many sites rely on ads as their business model
    If it weren’t for them showing ads I wouldn’t get access to many of the useful resources I currently get for free.

But I can’t really resist any longer. The reason: performance

In part, triggered by this post on Daring Fireball, I installed AdBlock (in Chrome) just to see how much of a difference it made to performance.

In a word: heaps

I was kinda blown away by just how much faster everything is when ads are blocked. It’s significant.

Robots are starting to break the law

As part of an art exhibition in Zurich, an automated online shopping bot is tasked with buying a random item each week on the deep web to the value of $100 in bitcoins.

Along the way it purchases ecstasy pills and a fake passport.

If this bot was shipping to the U.S., asks Forbes contributor and University of Washington law professor Ryan Calo, who would be legally responsible for purchasing the goodies? The coders? Or the bot itself?

Interesting reading, especially as it relates to ‘art’ and the legal immunity art may provide.

Bill Gates interview in Rolling Stone

Hopefully you’ve already seen this wonderful interview with Bill Gates in the latest issue of Rolling Stone. If not, then it’s well worth a read.

Top marks to the interviewer – a great bunch of questions (and so much better than the cringe-worthy questions Bill’s had to endure in the past). As always Bill is such a clear and useful thinker who answers so eloquently.

I’m tempted to include a few of the choice quotes here, but to do that I’d basically have to post the entire interview – every answer is good. So just go read it.

And when you’re finished there, make sure you read (or re-read) his recent Gates Notes letter (from early Feb). I’m really thankful to that for how it changed my understanding of Foreign Aid.

Virtualisation Smackdown next Wed 26 August

I’m pretty excited, I gotta say, about this month’s Sydney Business & Technology User Group meeting – we’re having a Virtualisation Smackdown! It’s this coming Wednesday, starting at 6pm.

Here are the details:

  • Date: Wed 26 August 2009
  • Time: 6pm – 9pm
  • Location: Microsoft, North Ryde (map)
  • Phone: 0413 489 388 (call me if you get there after 6pm and need to get in)
  • RSVP on Facebook (or email me)

Smackdown, Schhhmackdown!

What this means – for the smackdown uninitiated – is that we have a panel of speakers (6 in our case) each presenting on a different virtualisation product. We’ve got Citrix, VMware and Microsoft each presenting on a few of their products. As for how the smackdown term came to be used… well, actually I’m not really sure why it is used – it just sounds good I guess…

I think the concept is along the lines of them going ‘head-to-head’ in a contest to demonstrate the ultimate virtualisation champion! Or some such.

Who’s the night for?

Virtualisation is a big field, and there are literally hundreds of offerings out there. We can’t possibly cover it all. So, instead we are focussing on the big 3 vendors, with the aim of providing an understanding of what virtualisation is, and the benefits. Plus a chance to understand what each of the main players offer.


We’ll be covering developer, IT Pro and business owner scenarios, and as per the SBTUG mantra, our aim is to provide ‘high level clarity’. You may not come out with a technical understanding of how to configure a loopback network adapter on your chosen platform – but you will understand which platform to look at if you need to, say, run 64-bit programs in a virtualised environment as a developer (<- that’s just one example).


We’ve been very lucky with our speaker line-up (and a big thank you to Kathy Hughes for organizing this). Here’s who’s going to be ‘on stage’:

  • Kathy Hughes – Microsoft SharePoint MVP – covering an overview of virtualisation, plus the developer scenario
  • Steven Gross – VMware Asia-Pacific Product Manager – presenting on VMware ESX Server
  • Dino Soepono – Citrix – presenting on XenServer
  • Scott Lindsay – Citrix – presenting on XenServer with Dino
  • Jeff Alexander – Microsoft IT Pro Evangelist – presenting on the Microsoft Strategy and Hyper-V
  • Nick Rayner – Windows User Group Leader – presenting on Virtual PC and Virtual Server

I don’t have proper publicity photos of our presenters for the night, so the following ‘likeness’ will have to do:

This is not what the presenters look like


Actually, perhaps it’s more akin to a body building competition than wrestling. In my mind, it’s not so much about fighting each other, but rather presenting the muscles of your technology. I’ve asked the speakers to focus on demonstrating their strengths (as opposed to criticising their competition).

In fact, I’ll be asking the crowd to boo any speakers who take cheap shots at their rivals :-). This will be a no vendor bashing zone!

Format for the night

The format is pretty simple. Each presenter has 15 minutes to highlight the features and benefits of their product and outline the usual usage scenario (ie a desktop tool has different uses to a server product). Plus 5 minutes for questions. Given that we have an intro + 5 slots, I’ll be pretty strict with the time.

At the end of the night we’ll be having a 30 minute panel session with all speakers up the front, taking questions from the crowd. And of course feel free to hang around at the end, network, ask more questions of the speakers etc.

Here’s the agenda:

6:00pm : Welcome + News: Craig Bailey – Are you a Virtual Geek, a Veek?
6:15pm : Introduction to Virtualisation: Kathy Hughes
6:30pm : Virtualization for Developers with VMware Workstation: Kathy Hughes
6:50pm : VMware ESX Server: Steven Gross from VMware
7:10pm : Hyper-V + Microsoft strategy : Jeff Alexander from Microsoft
7:30pm : Pizza
7:45pm : XenServer + Citrix strategy: Dino Soepono and Scott Lindsay from Citrix
8:05pm : Virtual PC + Virtual Server: Nick Rayner
8:30pm : Panel session – ask the panel any virtualisation questions
9:00pm : Finish

Plus we have some great prizes – including a special-edition VEEK T-Shirt (a big thank you to Jon Harsem for the design) for one lucky attendee. These shirts have been specially designed and printed just for the night!


I ask for a $5 donation from attendees to cover the cost of pizza. Donating is optional of course, but I’ve noticed that the people who do donate are far and away more attractive, of higher intelligence, and the most interesting to be around. I suspect it is a causal relationship…

See you there

It’s going to be a big night. Make sure you’re there. Either RSVP on our Facebook event page, or send me an email to let me know you’ll be there.

Please spread the word.

We start at 6pm sharp!

Microsoft iPhone Apps

As Mashable and TechCrunch report, Microsoft is testing the waters with the iPhone App market, releasing a little app: Seadragon Mobile (here’s the official announcement on the Live Labs blog). The app, available for free on the iTunes App Store, allows you to browse Deep Zoom images effortlessly. Check out the simple 42 second demo on the blog. And for more on Seadragon itself, check this out.


Parallel Computing

Boring history prologue

I started playing with ‘computers’ back in the days when they came with 3K of memory (Vic 20 anyone?). And thank goodness I was too young to have experienced the punch card era… They quickly scaled up so that by the time I was at uni, 16MB of RAM was becoming standard. Fast forward to today and we can buy consumer notebooks with 16GB of RAM. Never in our wildest uni dreams did we think we’d have Gigabytes of memory to play with.

Today’s equivalent

Perhaps the equivalent of my uni experience for today’s student is the number of cores in a machine. In the future they’ll look back nostalgically at the days of having only 2 cores in their machines… ahhh those were the days – how did we ever survive with so few?

The reality of course is that we are already well down the path of multi-core CPUs. 4 cores is normal, 8, 16 and 32 are almost in the consumer space, and it won’t be long before talking about having thousands of cores is normal. Yes, thousands of cores.

That’s progress right? So, why is it of interest?

Why is this interesting?

The reason parallelism is interesting is because it solves the problem of ‘the next biggest bottleneck’. Performance gains are simply a process of finding the biggest bottleneck, reducing it, and moving on to the next biggest bottleneck. It’s a continuous cycle.

As CPUs have increased in performance, computational and graphic bottlenecks have been drastically reduced. But CPU speed has started to reach a limit. Tasks are queuing up waiting for CPU cycles. How do we service those tasks if the CPU is running at its maximum speed? Answer: we start adding the ability to provide more cycles. How do we do that? Simple – we add more cores. Problem solved.

Well, not quite. Because if tasks are queuing up, more cores just allows us to service more tasks in the same time. What we really want is to get through a single task quicker. This is where parallelism comes in. We want to be able to break a single task down into chunks that can be processed by multiple cores. But how on earth do we do that, with all its breaking apart, management and re-connecting once processed?
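That break-apart / process / re-join cycle is essentially the fork-join pattern. Here’s a minimal sketch of the idea (in Python rather than .NET, purely for illustration – the function names are mine):

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # The 'work' done on each piece -- here, just summing squares.
    return sum(x * x for x in chunk)

def parallel_sum_squares(data, workers=2, chunk_size=1000):
    # 1. Break the single task apart into chunks
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # 2. Hand the chunks to a pool of worker processes (ideally one per core)
    with Pool(workers) as pool:
        partials = pool.map(process_chunk, chunks)
    # 3. Re-connect: combine the partial results into the final answer
    return sum(partials)

if __name__ == "__main__":
    data = list(range(10_000))
    assert parallel_sum_squares(data) == sum(x * x for x in data)
```

The hard parts the paragraph above alludes to are exactly steps 1 and 3 – deciding how to split the work, and correctly merging the results – and for most real tasks they’re nowhere near this clean.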

And that’s why this is such an interesting problem.


Before I go any further let me recommend three excellent resources (which I’ve pulled most of the thoughts in this post from).

First up, have a read of this MSDN Magazine article on Parallelism in Visual Studio 2010. This article briefly describes the problem, plus highlights code samples of how the problem is approached. It finishes with some details about the debugging and profiling tools for parallelism in Visual Studio 2010.

Next, have a listen to .NET Rocks episode 375 with Steve Teixeira. I listened to this entire episode twice. First to get an introduction, and second to make sense of the context they were talking about. This is one of the better episodes in the last few months in my opinion. Steve is extremely eloquent at expressing the problem space, and Microsoft’s vision, in simple terms.

Last, take a look at this webcast on the Parallel Computing Platform team’s chat about parallelism, and how they approach it in Visual Studio. Steve features in this, and Daniel Moth demonstrates some of the new VS tooling (mentioned in the MSDN article above).

Oh, and you’ll probably also want to subscribe to the Parallel Programming blog.

Parallelism in context

Parallelism is not a new subject. It’s been around for years and is an important component in High Performance Computing (HPC).

The reason it is now a mainstream topic (ie not just lost in the dark halls of academia) is because it now affects even the most basic consumer. It has moved from the server room to the home user’s desktop. You’d actually have trouble trying to find a computer with a single core these days.

Is parallelism my problem?

This is a fair question. After all, the hardware vendors have been tackling this issue for a while. And they’ve started moving up the chain, with companies like Intel actively engaging in the software side of parallelism running on their chipsets.

Surely the problem of dealing with parallelism needs to lie at the lowest level, where the framework, operating system and drivers closest to the cores need to do the hard work of determining what gets processed where.

This has traditionally been my view. As a programmer I’ve long held that I should be able to write my code focussing on the business problem, and the framework (and OS) should take care of the technical issues (like maximising performance). And in many regards that is fine.

Sure, the framework can take some of the burden, but what about when something goes wrong? How do you debug code that gets parallelised, if you don’t even understand what parallelism is?

And that’s why it has finally dawned on me that parallelism is every programmer’s problem. We need to understand the basics of parallelism, in the same way we need to understand multi-threading, web state, and unit testing. We need to be able to understand when and where it is applicable, the appropriate patterns to follow and the methodologies in our companies to best apply it.

Parallel computing is a significant mind shift for programmers. It’s not taught (much) at universities, and it certainly isn’t marketed as a ‘sexy’ side to development. The toolset to date has been almost non-existent, and it’s no surprise there are hardly any applications written with parallelism in mind.

Also, consider it from a commercial perspective. The developers and companies who ignore parallelism will quickly find themselves at a distinct competitive disadvantage to those who embrace and design their offerings with parallelism in mind.

Visual Studio 2010 and .NET 4.0

Over at the MSDN Parallel Computing Developer Center the Parallel Extensions to the .NET Framework 3.5 are available as a CTP. This was first released back in November 2007, and the latest release is the June 2008 CTP. Installing this allows you to add some parallelism calls in your .NET code. Parallel LINQ (PLINQ) is one manifestation, whereby you can add .AsParallel() on your LINQ queries to parallelize them.
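To make that concrete: in C# the change is literally `dataSource.AsParallel().Where(...)` instead of `dataSource.Where(...)`. A rough Python analogue of what a parallel Where is doing under the hood might look like this (a sketch of the idea only – PLINQ’s actual implementation is far more sophisticated about partitioning and ordering):

```python
from multiprocessing import Pool

def is_prime(n):
    # A deliberately CPU-bound predicate -- the kind of work PLINQ targets.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def parallel_where(predicate, data, workers=2):
    # Rough analogue of PLINQ's data.AsParallel().Where(predicate):
    # evaluate the predicate across worker processes, keep matching items,
    # and preserve the original ordering when recombining.
    with Pool(workers) as pool:
        keep = pool.map(predicate, data)
    return [x for x, k in zip(data, keep) if k]

if __name__ == "__main__":
    print(parallel_where(is_prime, list(range(2, 50))))  # primes below 50
```

The appeal of the PLINQ approach is that the query author writes none of this plumbing – the one extra method call opts the query in, and the framework handles the partitioning and recombining.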

[As an aside, all the help sample code is in C#, so parallelism is obviously not intended to be of interest to any VB devs out there :-) ]

The exciting news is that all this parallelism stuff is being greatly enhanced and baked into .NET 4.0, and Visual Studio 2010 is adding significant tooling to allow it to be coded, profiled and debugged easily. It will also include clarifying the terminology (eg understanding the difference between threads and tasks).

Much of the goodness will be shown at PDC this week.

Parallelism at PDC

I’ll be interested to see how the topic of parallelism gets reported from PDC. There’s at least 9 sessions on the agenda, so Microsoft is certainly giving it some attention. But what will the punters think?



Obviously parallelism applies to computational tasks (ie it doesn’t solve the problems of disk I/O and network latency for example).

You might think the obvious place for parallelism is in tackling time consuming tasks. And you may be right. Rendering video would be a good example. Typically this is a long task, and having a thousand cores render out your animation (or special effects or whatever) would be a big boost.

But this is only the start. Big gains are also available in small sizes. Consider intensive in-memory data manipulation (eg LINQ to Objects). It may only be a 2 second task, but if you could break it into parts and have 1000 cores perform the analysis you might find it completed in a millisecond or two. Not much difference to the user experience on its own, but combined with numerous other activities it very quickly starts to add up. Imagine if every single background task on your machine was simply broken down and dealt with by a thousand cores. Imagine a day when the only waiting time is the human interaction.

The next bottleneck

Let’s jump ahead a few years and assume parallel computing is understood and practiced by all. What’s the next bottleneck? Assuming we can process data in parallel, the bottleneck is likely to be in physically moving the data around. The inherent latency in moving a terabyte from my machine in Sydney to yours in New York is going to be an interesting one to solve.
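Some rough numbers on why moving the data is a real bottleneck (link speeds picked arbitrarily, and ignoring protocol overhead and round-trip latency entirely):

```python
# Back-of-envelope: how long does a terabyte take to move at a given line rate?
TERABYTE_BITS = 8 * 10**12  # 1 TB = 8 x 10^12 bits (decimal terabyte)

def transfer_hours(link_bits_per_sec):
    return TERABYTE_BITS / link_bits_per_sec / 3600

for label, rate in [("100 Mbps", 100e6), ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
    print(f"{label:>8}: {transfer_hours(rate):6.2f} hours")
```

At 100 Mbps a terabyte takes over 22 hours; even at a dedicated 10 Gbps it’s still around 13 minutes. Processing in parallel is no help if the data spends hours on the wire getting there.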

TECHED: Lock note – Predicting the next 10 years in IT

Easily the best TechEd lock note I’ve seen (but then again I’ve only been to TechEd 4 times).

Miha Kralj talked us through the technology changes we’ll be seeing over the next few years.

If you get a chance to see his presentation (I’m sure it will be repeated at other events, or put up on a video site somewhere) make sure you do – it is well worth it.

It’s difficult to do the session justice, so I’ll just post a few info-bytes and comments he made.


What went wrong at IBM?

Having previously worked at IBM (and through the problems they went through) he is now a senior architect at Microsoft, based in Redmond. Looking back at IBM he wondered: with great people, great technology and a history of great success, what went wrong?

Ka ching!

His answer: they had a cash cow which couldn’t be messed with. That sacred cash cow was the mainframe.

His solution: You need to render your cash cow redundant before someone else does.


Unstoppable trends

Baby boomers: The boomers are ageing, and 80% of wealth is in their hands. Digital immigrants are losing out to digital natives. The new users want devices that know where they are, what they want and what they need (how to connect will be a given).

Fewer vendors: There are fewer major technology vendors. Example: server class computing, where there were 24 major vendors in 1997, 14 in 2000, 10 in 2004 and only 6 in 2008.

Data centre server consumption: More than one third of all servers on the planet are being consumed by Microsoft, Yahoo and Google. They buy servers by the container. There are only 3 connections: network, water (for cooling) and power. And they don’t even get a key to the container. The containers don’t even get any attention unless the availability of the servers inside drops below 95%. Cooling accounts for 2/3 of all the power required to run a container.

Cooling: Data centres are being located in cooler areas (eg Chicago and Siberia) in order to minimise cooling costs. The data centre is now controlled via coolmaps (ie the opposite of heatmaps) whereby the SQL Servers are ‘lifted’ and moved onto the servers in the cooler corners of the data centre (remember that everything is virtualised).

Environment: The carbon footprint of IT now matches aviation. 



Tuvalu’s biggest export is its domain name.

Nigeria’s second biggest money source is internet scams.

It is estimated that 50% of dating and relationship commencement in the US is online.


Understanding ‘… as a Service’

A taxi is ‘Car as a Service’ and it means that the questions we ask are different. eg we don’t check the make of taxi, or whether it is the colour of our preference. Rather, we check whether it is clean, available for hire, and big enough for our purposes.

Vendors are changing to be Providers. We will no longer sell technology, we will sell services.

  • Cloud enablers: Virtualise, Provision, Secure
  • Delivery models: SaaS, RTI, Premises
  • Core services: Search, Pay, Compute, Store
  • Consumer services: Play, Shop, Love, Help

The great migration: Don’t focus on solutions, focus on protocols – making your application available from anywhere.


The next web

The next web will be a mix of digital and real

  • First web: What can I find?
  • Second web: What can I contribute?
  • The next web: What do I need here and now?

The core issues will be Privacy and Trust


Closing thoughts from Miha

Miha closed with a few pointers to how our world is changing:

Age, location and even language don’t matter any more.

China produces half a million English speaking IT graduates every year (the US only produces 100,000)

The leadership of the future: will be based on meritocracy (not democracy)

Events will be digital and won’t require physical attendance.


My opinion

This TechEd brought together a few key concepts for me. Based on sessions I attended (including the lock-note), people I chatted with and things I read, two ideas really struck me:


The server room is dead

If you are a small to medium size company, then take a look around your server room – because it won’t be there much longer. Soon (eg within 10 years, and probably much less) there will be no such thing as a company server room. Everything will be outsourced to hosting companies. Your email, your databases, your file servers, your communications servers, your LOB servers, everything will be in the cloud. It will be quicker, more secure, and infinitely more scalable.

Aside: Australia will stand particularly exposed if our internet infrastructure can’t progress to support the bandwidth required.


The web is the future

I know this must sound totally obvious (and I even work for a web company) but it struck me more than ever that we are just entering the web era. We are at the beginning. All the social networking and e-commerce sites of the last decade are just a pre-cursor to what is to come. People still don’t get the web. Whilst to many individuals and businesses the web is still optional or a part-time involvement, the future will be an always connected and integrated part of our lives. In business you need to understand both the opportunity and the threat this brings.

Aside: In the Microsoft camp, most people haven’t yet woken up to what Silverlight and Mesh are going to bring (people still haven’t got past the fancy graphics and 5GB of free storage…).


Final thought

I’ve never really been a Ray Ozzie fan, but as a result of this last week I finally got what his vision is. Microsoft has a number of things to sort out along the way of course, but they’ve been putting the plumbing in place for a while now, and sleepy heads like me are finally starting to catch on…