Monday, 21 September 2015

To Succeed in Cloud Computing, We Must Revisit Our Own History

The use of cloud computing is growing, and this growth will accelerate in the coming years. The possibilities and seemingly limitless capacity of cloud applications are so enormous that immense ecosystems have developed around them. The key thrust of cloud computing is shared infrastructure, as it is popularly known, something that has the big-bang effect of changing information technology as we know it.

For years, great innovators have often figured out how to make our lives less laborious. Let’s leap back in time for a moment and look at a comparable revolutionary event that changed the world as we see it today.

Back when there was no artificially harnessed electricity (at least none available for mass production), an innovator named Thomas Edison came up with a way to generate current and transmit it using a technique called Direct Current (DC). This worked, but it was tremendously wasteful and would have required a vast number of generating stations to power the cities of the world. Other inventors in the field, including Tesla, worked on a substitute, Alternating Current (AC), which proved far more efficient than Direct Current for mass distribution.

What emerged at this point was a significant discovery, and one that proves history repeats itself: the need to create utility stations that acted as super nodes in a countrywide grid, relaying electricity throughout it and making power available whenever people needed it. This groundbreaking concept overturned the plans in place at the time and gave way to today's smart power grids, which give us access to power at any time at the flip of a switch.

Looking at today’s computing, we are operating more or less in the Direct Current era, where everything has to be supplied to individual users separately, leading to waste. Think of the many programs installed on your laptop, MS Office for example. This in turn forces you to spend more money on Windows-compatible software, and so on, yet these products are not custom-made for you; they are off-the-shelf products that are identical in nature, as long as you are comparing the same version. It is therefore very unproductive to struggle with installing the software when you could easily use it online whenever you need it.

What cloud computing does is make all these products available on the Internet or a shared network, and the new economy that emerges is one with no legacy lock-in. I can use any operating system I like with any software I need, without compatibility challenges, because all of it is made available over the Internet (think of Windows 7, 8, 10, Linux, etc.).

Ultimately, we may reach a day when we won’t need operating systems anymore; all we will need is simple base software installed on our laptops and computers that connects us to the Internet and to all our applications as we need them (on demand), without cluttering our limited storage space. Such efficiency can only be made possible by ensuring that today’s Internet connectivity is always on, always available and always reliable.


This shift has seen massive investments go into data centres as more and more ICT infrastructure moves to the cloud (the Internet). The future we shall see is one where data centres form smart grids and relay information in much the same way that electricity is relayed today. Data centres use a cloud foundation to virtually connect companies to the data that supports their business, removing physical and geographical barriers. The foundation of the cloud concept is the lesson learnt from our own history of the electric grid system, which has steadily improved over the years, including connecting seamlessly to various power sources.

Wednesday, 3 September 2014

The Way to Real-Time Data in Today’s Geo-Social Media Is a Universal Open Database of Places and People

Today’s society moves so fast that data users must be extremely cautious about relevance and authenticity. Take, for instance, tracking the movement of a person from one country to another during the sad era of Ebola. It would be unwise to ignore social media for critical information such as the person’s last update and the location registered by the social app. But social media alone cannot offer the much-needed information, because it isn’t based on real-time, localised information.

We have seen substantial growth in start-ups building apps for all types of uses. This effort, combined with the power of ever-improving mobile devices, creates an environment of limitless possibilities. While one medium might not be completely efficient at offering real-time data, integrating several platforms can offer almost real-time results that are crucial. But building such systems is complex because of the various scopes and contextual challenges involved, such as social and location (geographical) issues.

Let's focus on location. One challenge is getting accurate real-time information about locations. Crawling the web in real time is difficult, especially if you’re exploring millions of chunks of data; it requires a complex distributed system. Another major challenge is taking a piece of content and figuring out what actual locations it mentions, or what locations it is about. That requires developers to understand entities linguistically, including which words are proper nouns naming places and which are not. The final challenge is to build a “places” taxonomy that can be matched against queries. This part works best when humans contribute directly, feeding the system with accurate information. People can also participate indirectly through workplace systems that capture locations, with this data then shared safely (without exposing any private personal data that would breach privacy). A sketch of the entity-extraction step follows below.
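As a rough illustration of that entity-extraction step, here is a minimal sketch in Python. It assumes the open-source spaCy library and its small English model are installed (any named-entity recognition toolkit would do, and the example sentence is invented):

```python
# A minimal sketch of extracting place mentions from text, assuming
# spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_places(text):
    """Return the place names (geopolitical entities, locations,
    facilities) that the statistical model recognises in the text."""
    doc = nlp(text)
    return [ent.text for ent in doc.ents
            if ent.label_ in ("GPE", "LOC", "FAC")]

# Invented example sentence for illustration.
print(extract_places("The patient flew from Monrovia to Nairobi last week."))
# A well-trained model would output something like: ['Monrovia', 'Nairobi']
```

Recognising the mention is only half the job; the extracted names still have to be matched against the “places” taxonomy to become usable records.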

The initiative by Google and its partners to create a database of open places could solve the mystery of the “places” taxonomy, at least if we all make a bold move towards building systems that contribute to, and draw from, a central global open database where names and places are accurately spelt and kept up to date.

The issue of location and geo-tagging will ultimately get a social touch when we all connect to and contribute to the data. Unfortunately, system owners (organisations) fear imaginary threats when they are approached to consume open data; imaginary, because there is no risk in standardisation. This is a big deal, because consistency breaks down when, for instance, my software system shortens Nairobi to NRB while the airport’s system shortens it to NBI. The same applies to names of people, where I may choose to be Allan, not Alan, and any other variant is a spelling error. In this context, communication breaks down, as the sketch below illustrates.
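To make the point concrete, here is a hypothetical sketch of how systems could resolve competing local abbreviations against one shared record. The identifiers, codes and aliases below are invented for illustration, not entries from any real registry:

```python
# A hypothetical sketch of canonical place resolution against a shared
# open database. All records and aliases here are illustrative.
CANONICAL_PLACES = [
    {"id": "place/ke/nairobi", "name": "Nairobi",
     "aliases": {"NRB", "NBI", "Nairobi City"}},
]

# Reverse index: every known variant points at its canonical record.
ALIAS_INDEX = {}
for record in CANONICAL_PLACES:
    ALIAS_INDEX[record["name"].lower()] = record
    for alias in record["aliases"]:
        ALIAS_INDEX[alias.lower()] = record

def resolve(name):
    """Map any locally used variant to the single shared record."""
    record = ALIAS_INDEX.get(name.strip().lower())
    return record["name"] if record else None

# Two systems using different abbreviations now agree on one name:
print(resolve("NRB"))   # Nairobi
print(resolve("NBI"))   # Nairobi
```

With one shared alias index, my system and the airport’s system stop talking past each other, because both abbreviations resolve to the same record.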

The solution lies in an open database where edits made by those who notice errors are vetted for accuracy by the end users. This is also important for other real-time data, such as a change of business location, the demolition of a landmark, and so forth.
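As one possible reading of that vetting idea, here is a hypothetical sketch of an edit queue where a proposed change is only applied once enough end users confirm it. The record structure and the approval threshold are mine, not any real system’s:

```python
# A hypothetical sketch of community vetting for an open places database.
from dataclasses import dataclass, field

APPROVALS_NEEDED = 3  # assumed threshold, purely illustrative

@dataclass
class ProposedEdit:
    place_id: str
    fieldname: str
    new_value: str
    approvals: set = field(default_factory=set)

    def vote(self, user_id):
        """Record one user's confirmation; return True once vetted."""
        self.approvals.add(user_id)
        return len(self.approvals) >= APPROVALS_NEEDED

places = {"place/ke/nairobi": {"name": "Nairobi", "status": "open"}}

edit = ProposedEdit("place/ke/nairobi", "status", "landmark demolished")
for user in ("alice", "brian", "carol"):
    if edit.vote(user):
        # Enough independent users confirmed: apply the edit.
        places[edit.place_id][edit.fieldname] = edit.new_value

print(places["place/ke/nairobi"])  # status is now 'landmark demolished'
```

The design choice is simple: no single contributor can corrupt the shared record, yet genuine real-time changes still propagate as soon as a handful of end users confirm them.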