With the rise of technology, we are constantly building on research and development in Artificial Intelligence, otherwise known as AI. AI is used more frequently than we realise, and some of the newest technologies are helping us in everyday life.
Google recently held a demonstration showcasing their state-of-the-art product – Google Duplex. It’s the latest feature to be added to Google Assistant. Essentially, Google Duplex is a completely automated system that can intelligently (and usefully) make calls on your behalf (I know, cool right!). Although they haven’t programmed it to negotiate a job offer (or get you out of the dog house with your other half), what it can do is book your hair appointment, call your doctor’s surgery to book you in, and make dinner reservations on your behalf (note – this MAY get you out of the dog house). The system’s intelligence really shines here: it can understand a variety of accents, fast speech and complex sentences. The potential of this seems endless, and I can’t wait to see how it develops (if you don’t, you probably shouldn’t be reading).
AI has come along in incredible leaps and bounds within the last year. We now have AI-driven robots greeting you at reception and walking you to your meeting room, AI-driven supermarkets being rolled out by Amazon, autonomous vehicles – you name it!

Now, although it’s great (really great), there are some slightly shadowy sides. Industry leaders such as Bill Gates and Elon Musk have both (during conferences) highlighted the risks of AI, and they are fairly significant. Elon Musk recently commented that ‘AI is more dangerous than nuclear weapons’. He went on to say: "I'm really quite close, very close to the cutting edge in AI. It scares the hell out of me. It's capable of vastly more than almost anyone on Earth, and the rate of improvement is exponential." Musk has also predicted that Tesla’s Autopilot 2.0 “will be at least 100-200% safer than human drivers within 2 years” and that humans will eventually be able to sleep at the wheel (that’s dreamy, isn’t it?).
Musk believes that to mitigate the risk of AI becoming a severe danger to humanity, it needs to be regulated. I guess the opposing threat IS pretty obvious. He outlines a correlation between how far AI advances and its ability to threaten humanity: AI has the distinct potential to advance to a level where it could outmanoeuvre the human mind. Musk’s contingency plan to protect humanity should this threat occur is [insert drum roll here] SpaceX – the mission to land and house 1 million people on Mars.
Although this sounds great (and definitely a huge step for mankind), if AI ever gains a noticeable intelligence advantage over humans, would it really find it hard to follow us to Mars? I think not…
This is a widely debated subject, so feel free to leave your comments below. Thank you for reading.