Microsoft vs. Google: The Tortoise and the Hare of the Tech World?

Two bubbles. Photo by Marc Sendra Martorell/Unsplash.

Although over a decade in the making, the race between Bing and Google Search has only just begun. As Microsoft incorporates ChatGPT into search, and Google unveils its own chatbot competitor, “Bard,” only to fall at the first obstacle, will “slow and steady” win the race after all?

by Lauren Richards

February 12, 2023

For 14 long years since the launch of its silver-medal search engine, Bing, Microsoft has been wishing on every star in the night sky to find a way of connecting with the global netizens of the online world in the same way that its biggest rival, Google, seems to do so easily.

But despite many “Cortana” and “Outlook”-shaped attempts to catch attention and win over internet surfers and searchers once and for all, the bottom line remains the same: Microsoft’s slogan of “Bing and Decide” just doesn’t quite have the same ring as Google’s “I’m Feeling Lucky.”

However, just as the earth’s core seems to be spinning in the opposite direction, a similarly unexpected tide seems to be turning in the battle of the search engine giants; Microsoft’s prospects are looking up, and Google definitely isn’t feeling so lucky anymore. 

What has caused such a stir, you might ask? Well, the answer to this question, as you probably know (and if not, can undoubtedly guess), is ironically the one with all the answers itself: ChatGPT.

“What is ChatGPT?” isn’t a question we really need to answer at this point; the real question is “What isn’t ChatGPT?”

ChatGPT should really be called ChameleonGPT

ChatGPT, the automated chatbot created by the artificial intelligence (AI) research laboratory OpenAI, is the fastest-growing web app ever released, reaching a mass user base faster than either Facebook or Google did. In its simplest form, it is AI technology designed to respond to user queries with conversational, human-like written answers.

And sure enough, as its name implies, the chatbot can hold up a very polite, helpful and diplomatic conversation, so much so that many are now accusing ChatGPT of excessive “wokeness” as a result of being programmed with left-wing bias.

But more than this, as ChatGPT’s utility has spread across all professional sectors like wildfire, it has also developed a bacteria-like ability to multiply its capabilities, seemingly every single day.

Computer scientists are using ChatGPT to help write code. Teachers and students are using it to aid learning. But it doesn’t just stop there: research scientists, lawyers, journalists, doctors, criminals, dating apps, investors, fashion designers and even the Pentagon – you name it, everyone is using ChatGPT for anything from writing a Valentine’s card message to making a scam sound more convincing.

ChatGPT has even been shown to be capable of passing university medical exams!

It’s no surprise, therefore, that since its release late last year, ChatGPT has caused a furore of excitement and chaos amongst the global tech community, which has reacted with awe and anxiety in equal measure.


Truth be told, there aren’t many things in this world that transcend the many factions of society in quite the same way.

But aside from ChatGPT’s many use cases, as well as simply being a thing of technological beauty, its most profound trait is of course its ability to provide definitive answers to nuanced questions – in seconds – without pages of links to trawl through. 

It is without a doubt the symptom-googler, insurance-price-comparer, and cheap-flight-seeker’s wildest dream, and now poses a considerable existential threat to a myriad of websites that provide advice and guidance, as well as most notably, the traditional search engine.

Like a shot from a starting pistol, this fierce competition jolted Google from its complacent slumber late last year, seemingly striking CEO Sundar Pichai with a lightning bolt of motivation to catch up with OpenAI Founder and CEO, Sam Altman. Pichai reacted by immediately issuing a “code red” for Google to redistribute resources and fast-track the company’s own AI programs for release, some of which had been years in the making.

And surprise surprise, the scrambling culminated this week with Google unveiling their own chatbot, “Bard,” and confirming the company’s expected approach of opting to – for better or worse – go head-to-head with OpenAI and ChatGPT. 

Google titled Bard’s press release, “An important next step on our AI journey,” but in reality, we all know what they really meant was: “An existentially essential step-up in our AI journey.”

“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity, and a launchpad for curiosity,” said Pichai.

But what Google perhaps did not anticipate, or at least did not expect to be so successful, was Microsoft’s own ace-of-spades chatbot move in the opposite direction.

After too long spent in Google’s ever-growing, shapeshifting shadow, the parable of insanity often attributed to Einstein – “The definition of insanity is doing the same thing over and over and expecting different results” – seems to have finally struck a chord with Microsoft.

Because instead of trying to fight the inevitably losing battle of competing with OpenAI – a fight they’ve lost many times before against Google – Microsoft somewhat humbly chose to change tack and join forces instead, coming at Google from a different angle by partnering its search engine, Bing, with the opposition, ChatGPT.

The AI arms race

After a late January announcement that they would be investing $10 billion in OpenAI to “extend” their “long-term partnership,” earlier this week Microsoft announced the release of a new version of Bing that has ChatGPT technology incorporated into its search function.

“There are 10 billion search queries a day, but we estimate half of them go unanswered,” said Microsoft, going on to frame the purpose of the new search engine chat feature by saying, “We think of these tools as an AI copilot for the web.”

“With the new Bing, exploring the web isn’t just easier, it’s also more fun,” says Microsoft.

By collaborating with OpenAI this way, Microsoft may have finally created a monster that can rival Google’s almighty search engine and its new Bard, one with the sophisticated skeleton and DNA of ChatGPT, but wrapped with the familiar skin of Bing.

“We formed our partnership with OpenAI around a shared ambition to responsibly advance cutting-edge AI research and democratize AI as a new technology platform,” said Microsoft CEO, Satya Nadella.

“Microsoft shares our values and we are excited to continue our independent research and work toward creating advanced AI that benefits everyone,” echoed Sam Altman, CEO of OpenAI.

But before any kind of hotly-anticipated duel between Google and Microsoft’s chatbot-driven search engines was even able to take place, Google seems to have embarrassingly, and very expensively, fallen at the first hurdle. 

Google’s hubris

In the promotional video released by the company to introduce Bard to the world, Google featured a short clip of its chatbot aiding a search query from a parent asking what new discoveries from the James Webb Space Telescope they could share with their nine-year-old.

The video went on to show Bard quickly providing three short, simple facts about what the telescope has seen.

All was well, the promo video was well-received, and Google’s rush to release their chatbot competitor had seemingly paid off; the excitement brewing for a stand-off between Bard and the Bing x ChatGPT coalition was nothing less than palpable. 

Unfortunately however, no more than 72 hours after launch, Bard’s opening lap suffered a brutal anti-climax when astronomers saw the promo video on Twitter and quickly pointed out that one of the three “facts” the chatbot provided was in fact completely inaccurate.

The scientists rightly pointed out that the James Webb telescope did not in fact take the first pictures of a planet outside the solar system as Bard had stated; these images were instead captured by the European Southern Observatory’s Very Large Telescope in 2004.

After witnessing the onslaught of criticism faced by ChatGPT regarding its potential to spout falsities, bias and misinformation, the world is well-prepared at this point to accept the fact that new AI technologies – especially chatbots – are a work in progress that will almost certainly make mistakes as they learn and grow. 

But this particular mistake from Google is colossal in a different way. Not just because it insinuates that the company’s rush to develop Bard may have in turn made it prone to errors from the get-go, but also that the company’s online launch of the chatbot was so fraught that Google didn’t even fact-check the promo video. 

As a result, many are now questioning the company’s grip upon the field it played an integral part in creating. 


To say that this error was costly to the company would be a laughable understatement. In truth, so much doubt has been cast on Bard’s, and by extension Google’s, capabilities that the company’s shares have plummeted, losing $100 billion in value in the process.

Heads are surely rolling within Google’s marketing department.

With this David and Goliath-esque plot twist, one cannot help but remark on the apparent hubris of the hegemon of the internet, as over-confidence seems to have blinded it to the threat of the underdog, as well as to its own operational oversight.

The company has tried to stifle the flames from its chatbot’s car crash with reassurance, stating that this error simply highlights “the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester programme.” But the cracks in Google’s once-concrete facade are undeniable, and Microsoft is no doubt feeling smug that its humility in seeking companionship, rather than contest, with OpenAI has paid off.

It must be mentioned, however, that Microsoft’s decision not to develop a new chatbot perhaps benefited somewhat from hindsight, as one of its previous chatbots, “Tay,” caused a stir back in 2016 when it was found to have produced racist and sexist tweets. Nevertheless, the company has clearly learnt its lesson from that mistake, and at least it wasn’t in the promo video.

“It’s a new day in search,” said Microsoft CEO Satya Nadella at the new Bing x ChatGPT launch event. “A race starts today… and we’re going to move, we’re going to move fast.”

Bard vs. Bing’s ChatGPT: How do the chatbots compare?

Both Microsoft and Google have signalled their interest in chatbots as a result of what Nadella has called a “paradigm shift” occurring in the world of online searching.

In the past, people asked simpler questions that required simpler answers. But as time goes by, queries are becoming much more nuanced, and in many cases have many more than just one answer. As such, a page of links is not very helpful in tackling the evolution of questions. 

In a statement to mark the release of Bard, Google’s CEO stated:

“When people think of Google, they often think of turning to us for quick factual answers, like ‘how many keys does a piano have?’ But increasingly, people are turning to Google for deeper insights and understanding — like, ‘is the piano or guitar easier to learn, and how much practice does each need?’” explaining that “AI can be helpful in these moments, synthesizing insights for questions where there’s no one right answer.” 

That’s where a chatbot’s USP comes in: bestowed with the power of a large language model trained on billions of words taken from the vastness of the internet, Bard and ChatGPT are able to harness the might of their neural network infrastructure – inspired by the neuronal connections and cellular structure of the human brain – and formulate a simple human-like response to a complex query in seconds. 
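At its core, that response-generation process is autoregressive: the model repeatedly predicts a probability distribution over the next word and appends its pick. A toy sketch of the idea in Python (the tiny probability table and the `next_word_probs`/`generate` helpers are illustrative inventions; real models use neural networks scoring vast token vocabularies):

```python
# Toy autoregressive text generation. A real large language model
# replaces this hand-written table with a neural network that scores
# tens of thousands of possible next tokens.
TOY_MODEL = {
    ("how", "many"): {"keys": 0.7, "strings": 0.3},
    ("many", "keys"): {"does": 0.9, "on": 0.1},
    ("keys", "does"): {"a": 1.0},
    ("does", "a"): {"piano": 0.8, "guitar": 0.2},
}

def next_word_probs(context):
    """Probability distribution over the next word, given recent context."""
    return TOY_MODEL.get(tuple(context[-2:]), {})

def generate(prompt, max_words=4):
    words = prompt.split()
    for _ in range(max_words):
        probs = next_word_probs(words)
        if not probs:  # no continuation known for this context
            break
        # Greedy decoding: append the single most probable word.
        words.append(max(probs, key=probs.get))
    return " ".join(words)

print(generate("how many"))  # -> "how many keys does a piano"
```

Greedy decoding is shown for simplicity; production systems layer sampling strategies, safety filters and web retrieval on top of this basic loop.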

As Pichai explains in Bard’s press release, chatbots can help “distil complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web.”

Bard is based on Google’s large language model LaMDA (Language Model for Dialogue Applications), in the same way that ChatGPT is based on OpenAI’s own language model, GPT-3.5.

In the photo: Large language models. “The streams of data which large language models produce can be thought of as ‘contours’: strings of information flowing around the content they are trained on, responding to the tasks they are asked to perform. As with the previous concept, these forms can respond dynamically to objects which represent the issues within technology,” says artist Tim West. Photo credit: Tim West/Google DeepMind/Unsplash

But the thing is, because these chatbots are trained on data taken from the internet, they also inherit the many factual errors, biases and pieces of misinformation that are littered throughout the online world too.

This explains how Google’s Bard was able to make such a mistake with the James Webb Telescope trivia; somewhere on the internet there is a report, article or blog post (most likely many) that states this error as fact. Bard was simply sourcing, collating and packaging this information into a response it deemed helpful to the user: it has no ability to fact-check or to reason about what’s true or false.

Despite the enormous losses Google has faced in the past few days as a result – both economically and reputationally – it must be said that Bard is still in the testing phase, with only a select panel of experts having access to its pilot at present; it’s still largely very much a work in progress. 

The chatbot is, however, expected to be rolled out to the public in a few weeks in a “lightweight,” low-compute format that will allow it to be scaled to many users at once; a noteworthy advantage over ChatGPT, which regularly glitches or becomes unavailable when its capacity is overloaded.

There’s still quite a lot of fog around the datasets used to train Bard, though: in the LaMDA research paper released by Google last year, the company states that 12.5% of the model’s training dataset was taken directly from Wikipedia, a crowd-edited site that is not a consistently reliable source of factual information.

What’s more, given that the neural networks chatbots are built upon sample their outputs probabilistically rather than deterministically, the answers they generate are by and large different every time, begging the question: How is this expected lack of consistency going to impact a search engine’s ability to produce replicable and reliable answers to user queries?
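A minimal sketch of where that variability comes from: instead of always taking the single most likely next token, chatbots typically sample from the model’s probability distribution, with a “temperature” knob controlling how adventurous the choice is (the `sample_next_token` helper and the toy distribution here are illustrative, not any vendor’s actual API):

```python
import random

def sample_next_token(probs, temperature=1.0):
    """Sample one token: low temperature -> near-deterministic,
    high temperature -> more varied output."""
    # Sharpen (T < 1) or flatten (T > 1) the distribution.
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    r = random.uniform(0, sum(weights.values()))
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # guard against floating-point rounding at the top end

probs = {"piano": 0.6, "guitar": 0.3, "violin": 0.1}
# Identical inputs, potentially different outputs on every call --
# which is why the same query can yield different answers each time.
print([sample_next_token(probs) for _ in range(5)])
```

As the temperature approaches zero, the distribution collapses onto the most likely token, which is how systems can offer more reproducible output when consistency matters.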

We will probably need to wait for the full roll-out of Bard to know the exact breadth, scope and accuracy of the chatbot’s knowledge. But if it is fully integrated into Google Search, as has been alluded to, imagine how many people will be using Bard, given that in 2022 there were 8.5 billion searches on Google per day.

ChatGPT’s user base of 100 million monthly users somewhat pales in comparison.

Scope of knowledge is actually also one of the major limitations of ChatGPT, as its sparse post-2021 knowledge means that any question dependent on up-to-date information is likely to yield an unreliable answer, if not no answer at all. 

The version of ChatGPT incorporated into Bing, however, does not share this flaw: Microsoft says the model integrated into its upgraded search engine has world knowledge that extends up to the present day.

Another important consideration here is how this chatbot revolution is going to affect the many companies that depend on people clicking their links to maintain an online and market presence, and more importantly, how it will affect their ability to generate the much-needed revenue their ads are responsible for.

At present, it’s too early in the game to answer this, least of all with “one true answer.”

Bing, for example, does already have ads incorporated into the chat function of its search, but whether this will remain a successful method of lucrative ad-clicking, and whether Google will follow suit, is yet to be determined.

Big AI isn’t necessarily safe or sustainable

There’s a lot of noise surrounding AI chatbot technologies at present, as well as, ironically, a lot of unanswered questions. 

There are those who say chatbots are overhyped, others who fear being replaced by them, and some who have even banned their use outright; and that’s before we even begin to unpack the myriad ways they truly threaten democracy, privacy and wellbeing in their propensity to spread misinformation and bias.

Chatbots have widely been shown capable of simply making stuff up, creating malware and harmful content through “jailbreaking,” and costing millions of dollars to train. 

In addition to this, they’ve also been accused of infringing on copyright and regulatory legislation, as well as posing a significant risk to sustainability efforts as the high computational demand of the AI models they run on require a lot of electricity and water to power and keep cool. 

“Big AI really isn’t sustainable,” says Dr Andrew Rogoyski, Director of the Institute for People-Centred AI at the University of Surrey.

But despite the ever-growing list of ways that these chatbots are seemingly able to manifest harm, the public’s ardour for their artificially intelligent Q&A (and broader) capabilities is still boundless.

As a result, senior EU officials have expressed concern, with EU industry chief Thierry Breton stating last week that a “solid regulatory framework” would soon be introduced to govern AI systems. 

Such regulations will soon be increasingly necessary given that China is also rumoured to be poised to release its own chatbot in the coming weeks, raising serious concerns with regard to censorship, propaganda and privacy. How far can the imposed agenda and systematic disinformation of an autocratic government reach if given free rein over sophisticated AI infrastructure?

Microsoft founder, multi-billionaire and philanthropist Bill Gates says Bard, ChatGPT and other such chatbots “will change our world.” But the recent developments still unfolding beg the question: hasn’t it already been changed?

The new AI era

According to Moore’s law, articulated in 1965 by engineer and Intel co-founder Gordon Moore, the number of transistors on a computer chip, and with it computing capacity, should double roughly every two years.

But in today’s accelerating world that’s full to the brim with technological advancements like machine learning, we’re exceeding this rate significantly, with AI now anticipated to be doubling in complexity around every six months. 
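The gap between those two doubling rates compounds dramatically. A quick back-of-the-envelope calculation (the six-year window is arbitrary, chosen purely for illustration):

```python
def growth_factor(years, doubling_period_years):
    """How many times a quantity multiplies if it doubles once
    every `doubling_period_years` for `years`."""
    return 2 ** (years / doubling_period_years)

# Moore's-law pace: doubling every 2 years.
print(growth_factor(6, 2.0))   # -> 8.0    (2**3)
# Reported AI pace: doubling every 6 months.
print(growth_factor(6, 0.5))   # -> 4096.0 (2**12)
```

Over the same six years, the faster cadence yields a factor 512 times larger, which is the sense in which AI progress is said to be outpacing Moore’s law.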

But is this change for the greater good, or not? And to what extent should we really fear chatbot technology, as opposed to being excited by the rocket fuel its capabilities could bring to drive progress, innovation and global solutions?

Microsoft has expressed motivation to use the technology to “empower people to unlock the joy of discovery, feel the wonder of creation and better harness the world’s knowledge,” all the while preserving human values.

OpenAI, along with “shaping the future of technology” in a way that is “aligned with human values and follow[s] human intent,” claims to be on a mission to “ensure that artificial general intelligence benefits all of humanity” with its research.

And Google, the current gateway to the internet that houses AI chatbot technology in its entirety, has similarly stated their intentions in developing Bard “to be centred around helping people, businesses and communities unlock their potential,” opening “new opportunities that could significantly improve billions of lives.”

Google was in fact one of the first companies to develop and publish a set of guiding principles for the best practices of AI. 

But the question is, despite these claimed good intentions – and regardless of which tech giant comes out on top – will the global community be able to safely navigate the new landscape that’s mapped out as they redraw the global terrain with their AI revolution? 

And if so, will it be in a way that fosters collaboration, prosperity and evolution, or will the turbulence caused by such rapid change derail the planet entirely? 

Not to mention that millions, if not billions of people might find themselves “enslaved” by AI disinformation without even knowing it if autocratic governments fail to exercise restraint in using chatbot technology to shape the minds of their people.

Like with anything new, we’ll just have to wait and see. 


This article was originally published on IMPAKTER. Read the original article.
