Andrew Wade, senior reporter
A couple of weeks ago I wrote about my visit to Founders Factory, a London incubator that’s home to a gaggle of exciting startups. Iris, an AI-based scientific research assistant, was one. illumr, a company that uses AI to create 3D visual data patterns, was another.
Jason Lee, illumr’s founder and CEO, set up one of the UK’s first internet companies way back when. His current venture, built on technology that came out of Birmingham University, is part of the coming wave of AI startups hoping to change how we interact with the world. At its core, illumr is a data analytics platform. Its secret sauce, if you will, is how it presents data to its users, and the insight those visualisations enable.
Humans can easily recognise patterns such as a flock of birds (Credit: illumr)
Humans have evolved over millennia to recognise patterns. Our ability to do so underpins our understanding of everything around us, from pictures and the written word to oncoming traffic when we’re crossing the street. Computers, on the other hand, are number crunchers. Feed them data and they can perform calculations in milliseconds that are simply beyond mere mortals. illumr aims to pair today’s computational power with our innate ability to see patterns, and to harness the potential of both.
By presenting data sets in navigable 3D spaces, illumr allows users to identify patterns and correlations and explore their meaning. It calls this process ‘augmented cognition’, and it has the potential to deliver insight across a whole range of areas, from identifying fraudulent transactions to discovering new drugs. One of illumr’s earliest tests came courtesy of the Home Office and MOD, which used the technology to help counter terrorist networks.
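illumr hasn’t published the details of its pipeline, so purely as a hedged illustration of the general idea: the short Python sketch below (using scikit-learn and matplotlib on a stock dataset, none of it from illumr) squashes 64-dimensional data down to three dimensions and renders it as a rotatable 3D scatter plot, the kind of view in which clusters and outliers leap out at the human eye.

```python
# Hypothetical sketch of the general idea only, not illumr's method:
# project high-dimensional data into 3D so a person can spot structure.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# A stock 64-dimensional dataset: 8x8 images of handwritten digits.
X, y = load_digits(return_X_y=True)

# Reduce the 64 dimensions to 3 so the data can live in a 3D space.
coords = PCA(n_components=3).fit_transform(X)

# Draw a rotatable 3D scatter plot; dragging the matplotlib window
# around is a crude stand-in for navigating the space.
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(coords[:, 0], coords[:, 1], coords[:, 2], c=y, s=10, cmap="tab10")
ax.set_title("64-D digits projected to 3D")
plt.show()
```

Even this crude projection separates some of the digit classes into visible clouds of points, without the computer ever being told what a digit is; the pattern-spotting is left to the viewer.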
“They link all their military systems together, but they also have non-traditional data sources being brought to the mix,” Lee explained to me.
“We looked at social media, we looked at Twitter feeds, we looked at English Defence League, and we were able to reveal some interesting insights about demonstrations, a particular demonstration up in Birmingham. We were able to get some rules that could predict sentiment, and also we were able to refine the ability for the rules to predict behaviour, and in so doing, lowered the false positives by 66 per cent in one instance.”
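Lee doesn’t give the underlying numbers, but it’s worth pausing on what a 66 per cent cut in false positives buys you. The toy Python snippet below uses entirely made-up counts (an assumption for illustration, nothing from illumr) to show how such a cut lifts the precision of an alerting rule.

```python
# Entirely hypothetical counts, to illustrate a 66% false-positive cut.
alerts = 1000                              # posts the rule flags for review
true_positives = 700                       # flags that were genuinely right
false_positives = alerts - true_positives  # 300 posts wrongly flagged

refined_fp = false_positives * (1 - 0.66)  # 66% fewer false positives

print(f"False positives: {false_positives} -> {refined_fp:.0f}")
print(f"Precision: {true_positives / alerts:.0%} -> "
      f"{true_positives / (true_positives + refined_fp):.0%}")
# False positives: 300 -> 102
# Precision: 70% -> 87%
```

In other words, with these illustrative numbers the refined rules turn roughly three wrong flags in ten into about one in eight, a big difference when every flag costs an analyst’s time.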
Fighting terrorism, combatting fraud, and advancing medical science are clearly all examples where AI and algorithms can benefit us. However, there is also undoubtedly a darker side to data, in particular how some of the world’s tech giants are using it. At this stage, most of us are probably used to Google and Facebook serving adverts based on our search and browsing habits. It can be intrusive, and occasionally even a little creepy, but it’s also understandable to a degree, and relatively harmless.
But the level of insight these companies now have into our lives goes well beyond the websites we visit or our online purchases. As the saying goes, if you can’t figure out what the product is, the product is you. The trade-off for free search, email and social media is the mountain of data these companies harvest from our online activity. Today they know more about us than many of our friends and family do. Not only can they analyse how we feel; there is evidence that they can also influence those feelings.
It was reported earlier this week that Facebook told advertisers in Australia that it was able to identify teens who were feeling ‘insecure’ and ‘worthless’. After initially apologising, the company issued a statement saying it does not “offer tools to target people based on their emotional state”. But if that is the case, why were advertisers being pitched this capability? When the power of these technology giants has reached the point where our moods can be monitored, manipulated and potentially monetised, we have surely crossed into the realms of dystopia.
Despite the protestations of one former Facebook executive, data does not ‘sometimes behave unethically’. Data does not have agency. People and companies have agency, and can act unethically based on insights from data. As analytics tools become ever more sophisticated, we must be careful they are used responsibly. Algorithms and AI should be there to enhance, rather than exploit, humanity.