I confess I agree - to a point. When governments legislate in the face of new advances they often over-react, sometimes attempting to assert centralised control where it's neither needed nor warranted. I write this with a strong belief that national governments will do a pretty crap job if left to draft and enact legislation through existing parliamentary structures.
But in a domain where the fittest - and largest - survive, who's left batting for the consumer? And importantly, who's funding the organisation batting for the consumer?
The domain is complex, and, as in other complex areas, lawmakers - MPs and civil servants - will be looking for outside guidance. In the absence of an established, recognised and adequately funded policy research institution, the loudest voices heard will be those of the industry lobbyists defending the business models of their paymasters.
Given the number of stories I see inserted into the press about the threat from terrorists (real and cyber) using the internet to co-ordinate attacks, one can also assume there will be the loud voice of security apparatchiks claiming civil rights need to be sacrificed in the name of national protection.
Even the Guardian inserted a scare element into an otherwise worthy story, claiming Chinese readers were "stealing" ebook downloads "loaned" from British libraries in order to "circumvent copyright laws". Quite what relevance the Chinese link has - aside from its being one of the countries frequently mentioned in association with cyber terrorism - is beyond me, when most ebooks appear on file-sharing websites within hours of their release (sometimes even before it), entirely unconnected to loans from British libraries.
Playing to our inherent fears and insecurities is symptomatic of governments, organisations and media, especially when arguing for stricter controls - in this case over electronic media and digital copyright protection.
Stretching national boundaries
Putting fear, terrorism and copyright controls aside, we are now firmly in an era characterised by the fluidity of data, yet our laws still defer to national boundaries.
A case in point is the variation across nations in the strength and effectiveness of data protection laws. All European countries have laws designed to protect the consumer when they hand personal and often sensitive data over to private companies. EU law not only limits how this data can be shared with other companies, but also where in the world the data may be "exported" to or stored.
So-called "cloud computing" services - where the practicalities of managing physical servers, and indeed the geographic location of those servers and the data stored upon them, are delegated entirely to the service provider - bring unique data protection compliance issues for the data controller of any EU company putting customer data into "the cloud".
But what we see now is only the beginning of a problem that's about to get a lot more complex.
Today many companies apply the word "cloud" as a marketing ploy to tempt customers into using their server farms. In reality they end up selling access to physical servers running standard operating systems, or a virtual server on a moderate-sized cluster sat in one physical location firmly inside one national jurisdiction, meaning customers with enough buying power are still able to specify where in the world their data resides.
But I believe at least one company - Google - has developed a genuine cloud computer. Perhaps the world's first true globally distributed "gigacomputer" (my tagline). A processing fabric acting to all intents and purposes as a single computer would, but whose physical processing and storage is genuinely abstracted from the programmer or end user, and distributed around the world.
In short, Google's cloud offering, the App Engine, acts like no other computer on earth. It takes a processing request and fires it off into the cloud, where it somehow finds its way to an idling lump of silicon somewhere in the world; the request is processed and the response fired back to the end user. From what I've learned so far, each of a series of 10 requests issued by the same user may end up being processed in a different country, and similarly data stored in a database may end up distributed across a dozen national boundaries.
Naturally Google are coy about what they've actually developed to power their cloud-based services, but given my professional experience in this area I'm inclined to believe they may already have reached the computing holy grail of a near-infinitely scalable distributed computing fabric.
But if they have, it also follows that Google, as the operator, aside from choosing where to build its data centres, has very little - if any - control over where in the world data will end up being stored. In fact it's most likely to be stored in several locations, simultaneously, as the traditional concepts of memory and [disk] storage blend together in a true cloud.
In the cloud, data is stored in enough places to overcome the likelihood of failure of the storage medium, whether that be dynamic memory, non-volatile memory or traditional hard disk. Replication is increased in order to overcome bandwidth bottlenecks and storage is chosen by the fabric and within the fabric to maximise storage capacity whilst minimising latency (the time taken to access the data when needed). That's all the programmer knows or indeed cares about when building services running in a true cloud, and the algorithms dictating storage are unlikely to take national boundaries into account.
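As a toy illustration of that last point - the data-centre names, latencies and capacities below are entirely invented, not anything Google actually runs - a placement routine that scores candidate sites purely on latency and spare capacity will happily ignore jurisdiction altogether:

```python
# Hypothetical sketch: how a cloud fabric might pick replica locations.
# Site names, latencies and capacities are invented for illustration.

REPLICATION_FACTOR = 3  # enough copies to survive loss of any one medium

data_centres = [
    # (name, latency_ms_to_user, free_capacity_tb)
    ("ireland", 20, 120),
    ("oregon", 140, 400),
    ("singapore", 210, 350),
    ("belgium", 25, 80),
    ("tokyo", 230, 500),
]

def place_replicas(centres, k=REPLICATION_FACTOR):
    """Choose k sites, favouring low latency and spare capacity.
    Note that nothing in the score mentions national boundaries."""
    scored = sorted(centres, key=lambda c: c[1] / (1 + c[2]))
    return [name for name, _, _ in scored[:k]]

print(place_replicas(data_centres))  # → ['ireland', 'belgium', 'oregon']
```

Three copies, three countries, and the algorithm never once asked where the user's data was legally allowed to live.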
That's not to say that Google aren't taking data protection and issues of jurisdiction extremely seriously. A well-placed source outlined some of the technologies they were working on to make their global gigacomputer suitable for sensitive processing tasks.
"One of the things I talk about quite a lot is the need for data to be 'tagged' with metadata indicating how it should be handled - e.g. 'this is medical data that must be treated according to UK law'
"Any cloud infrastructure can then make appropriate decisions, akin to how routers move traffic according to type today.
"We're not there yet on an international standard for such a data taxonomy. I am told some industries (such as insurance) do have a good internal taxonomy but what we need is a true global cloud standard for the classification of data."
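The tagging scheme the source describes might look something like this - the tag names, rule table and data-centre list are entirely invented for illustration, since, as the source says, no international standard for such a taxonomy exists yet:

```python
# Hypothetical sketch of metadata-driven, jurisdiction-aware placement.
# Tag names, rules and sites are invented; no real standard exists yet.

# Which jurisdictions may hold data carrying each tag.
HANDLING_RULES = {
    "medical/uk": {"uk"},                         # must stay under UK law
    "personal/eu": {"uk", "ireland", "belgium"},  # EU "export" restrictions
    "public": None,                               # unrestricted
}

DATA_CENTRES = {"uk", "ireland", "belgium", "oregon", "singapore"}

def allowed_sites(tag):
    """Return the data centres permitted to store data with this tag,
    akin to a router consulting a policy table before forwarding."""
    if tag not in HANDLING_RULES:
        return set()  # unknown classification: fail safe, store nowhere
    allowed = HANDLING_RULES[tag]
    if allowed is None:
        return set(DATA_CENTRES)
    return DATA_CENTRES & allowed

print(sorted(allowed_sites("medical/uk")))  # → ['uk']
```

The lookup itself is trivial; the hard part, as the source implies, is getting the world to agree on the taxonomy of tags in the first place.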
With issues of privacy, trust and data protection higher than ever on the public and corporate agenda, by the time the world's first global computing fabric is launched (if it's not already with us) the concept of national boundaries will never have been less relevant in the digital domain. That leaves one pertinent question: who, ultimately, is the responsible party - or government - in charge?
Funding the enemy
When a handful of private companies hold the reins to the world's data, the issues are monumental and broad-ranging, covering free speech, censorship, copyright protection and sub judice as well as data protection and privacy.
Data protection itself extends to the vast amount of data incidentally generated by each of us daily, e.g. each time we make a phone call or send a text, our exact location is pinpointed and stored by our phone company, both for legal compliance and - in at least one case - for soon-to-be-launched geographic advertising services. Companies we choose to trust literally know where we are for most of our lives.
The new jurisdictional problems created by global services also extend into law enforcement and national security, as the recent spats between national governments and the Canadian maker of the BlackBerry over access to encrypted private communications demonstrate.
Policy research and advocacy work is badly needed to ensure a balance is struck in the three-way mash of the interests of the consumer, the corporation and the nation, yet there's no clear answer as to how this can be funded to achieve a largely impartial and untainted outcome.
Industry bodies such as the Interactive Advertising Bureau have raised related issues in an attempt to increase consumer confidence in the global market for personalised internet advertising, which ultimately relies on the trade of personal profiles - a list of interests and likes of individual net users - in order to display adverts relevant to their tastes.
Yet campaign groups tell me they're sceptical of such approaches, fearing groups like the IAB have opted for self-protection over self-regulation, and act as a body promoting the business interests of its members - the advertisers - and not the consumer.
The concept of funding one's enemy may be counter-intuitive - barmy, even, to some organisations, who see digital rights activists as an unworthy and unnecessary distraction from the business of offering innovative new services that will improve our everyday lives (as well as monetising their digital offerings whilst protecting more traditional revenue streams).
Yet some organisations do at least see the benefit of having a much-needed open and honest debate in public now, rather than burying their heads, carrying on in their own interest and hoping no-one notices. They understand that delaying the debate may actually be bad for business: whilst critical reports may dent consumer confidence in the short term, they may lead to a stable environment for long-term growth. Festering problems left untreated for too long may be difficult to tackle when they eventually manifest as a debilitating disease.
But unilateral funding of policy research by any single organisation is a non-starter, for two reasons. First, consumer confidence would likely remain unmoved by "research, funded by Microsoft, shows that Microsoft is brilliant at privacy and data protection." (Which of course they might actually be, so don't sue!)
A less publicised problem is the reluctance of a campaign organisation to accept money from potential adversaries. Several have tried this approach, feeling more can be achieved working with an organisation than fighting against it. But in at least one case the organisation's reputation was almost destroyed by the media fall-out, and in another instance funding was abruptly pulled after it became clear the organisation in question was coming to some pretty damning conclusions.
Before anyone mentions government funding, let me cite (i) the budget deficit; and, (ii) the Advisory Council for the Misuse of Drugs. The former is self-explanatory and the latter, a government-funded policy body, was recently torn apart by political interference, culminating in the sacking of Professor David Nutt. And yes, there is significant political interest in controlling the debate on internet regulation, just as there is for drugs policy.
But a gap needs plugging. Research budgets are under severe pressure and, in the UK especially, digital rights organisations are woefully underfunded. They couldn't, for example, afford to fight a high-profile court case against a large corporation or government department, as equivalent US-based organisations such as the EFF or ACLU have done on many occasions.
And the situation in the UK may be about to get even worse with the announced closure of the statutory consumer body Consumer Focus, which has in the past fought some important privacy and data protection cases against major ISPs on behalf of consumers.
I see the answer in a digital council funded wholly or in part by global technology companies, but with a mandate and structure that helps to eliminate interference by its backers. Easier said than done, you might think. But with privacy and digital rights high on many corporate agendas, funding the enemy may be seen as the least-worst option, given the noises emanating from national governments and, in particular, the hard line coming from the European Commission.