It’s no secret that artificial intelligence (AI) has been steadily gaining momentum, to the point where it now seems mainstream.
For years we’ve all heard about machine learning, deep learning and neural networks, but only recently have they received broad media coverage. As a result, more people are interested in AI and, more importantly, more developers are starting to work with it.
Yet as more people take an interest in AI (and, quite frankly, interact with it every day without even knowing), issues arise around how data is collected, transmitted and interpreted.
For one, an AI, much like a human being, needs a lot of information to get better. But while collecting information is cumbersome for a human and doesn’t usually pose any threat, an AI collects data at incredible speed and from extremely varied sources. An AI that is allowed to gather whatever information it deems necessary from the Internet, without any constant guidance, is bound to turn hostile and, eventually, dangerous. The famous case of Microsoft’s Tay comes to mind: it had to be shut down after only 16 hours because “she” was posting inflammatory and offensive tweets on Twitter.
Then there’s Google’s DeepMind (a $500 million acquisition, no less), by some standards the most advanced AI in existence today. Not only does DeepMind gather and interpret data differently from anything we’ve seen before, beating humans at games where we used to excel, but it does so in ways that baffle even its creators: rewriting itself and creating its own encryption system. And if that weren’t enough, during one test “Google’s AI got ‘highly aggressive’ when competition got stressful in a fruit-picking game”.
At the other end of the spectrum, we have Elon Musk. He famously founded and funded OpenAI, which “aims to carefully promote and develop friendly AI in such a way as to benefit humanity as a whole”. He also founded Neuralink, a neurotechnology company developing the neural lace, an implantable human-computer interface that aims to merge human intelligence with artificial intelligence; I call this mixed or hybrid intelligence.
Now what does all this have to do with data, and more specifically with data exchange? And what does it mean for your company, which relies on data?
Well, first of all, AI is fundamentally changing the way we collect and use data, in some cases, as seen above, in ways we don’t even fully understand. And when that data comes from different sources in different formats, how are you going to unify it?
And second, and I believe this to be perhaps the most important aspect, how we humans and AIs gather and exchange data will be paramount to the success of both “species”. We naturally speak different “languages”, and even though we create AIs that can interact with us, how we process information and how we unify its meaning will determine the success or failure of each party.
In that sense, companies that understand and prepare for the shift in how data is collected, exchanged and interpreted will gain the upper hand in the very near future.
We at Gloobus are preparing for this shift with the GSB (Gloobus Service Bus), which can gather, exchange and unify data from any source into a single format, in real time.
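The internals of the GSB are not described here, but the general idea of unifying data from heterogeneous sources can be sketched in a few lines. The following is a minimal, illustrative example, not the actual GSB implementation: per-source adapters normalize records (here, a JSON API payload and a CSV feed, with made-up field names) into one shared schema.

```python
import csv
import io
import json
from dataclasses import dataclass, asdict

# Unified record that every source is normalized into.
# The field names are illustrative, not an actual GSB schema.
@dataclass
class UnifiedRecord:
    source: str
    entity_id: str
    value: float

def from_json_api(payload: str) -> UnifiedRecord:
    """Adapter for a hypothetical JSON API source."""
    data = json.loads(payload)
    return UnifiedRecord(source="json_api",
                         entity_id=str(data["id"]),
                         value=float(data["reading"]))

def from_csv_feed(row: dict) -> UnifiedRecord:
    """Adapter for a hypothetical CSV feed source."""
    return UnifiedRecord(source="csv_feed",
                         entity_id=row["sensor"],
                         value=float(row["val"]))

def unify(json_payloads, csv_text):
    """Run every source through its adapter and emit one common format."""
    records = [from_json_api(p) for p in json_payloads]
    reader = csv.DictReader(io.StringIO(csv_text))
    records += [from_csv_feed(row) for row in reader]
    return [asdict(r) for r in records]

if __name__ == "__main__":
    unified = unify(
        ['{"id": 7, "reading": 21.5}'],
        "sensor,val\nA1,19.0\n",
    )
    for rec in unified:
        print(rec)
```

The design point is simply that each new source needs only one small adapter; everything downstream consumes a single format.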