What AI Thinks of Other AI

I asked ChatGPT what it thought about Google’s Bard. At first it claimed it didn’t know what I was talking about. After I gave it a little more context, it said that it didn’t have personal opinions and went on to talk about BERT instead:

Google’s BERT (Bidirectional Encoder Representations from Transformers), if that’s what you’re referring to. BERT is a state-of-the-art language model developed by Google. It is pre-trained on large amounts of text data and then fine-tuned on specific natural language processing (NLP) tasks, such as text classification, question answering, and sentiment analysis. BERT is based on a transformer architecture, which allows it to consider the context of words in a sentence and understand the relationships between them.

It has achieved impressive results on a wide range of NLP tasks and has been widely adopted by researchers and practitioners in the field.

In 2021, Google introduced a new version of BERT called BERT-2, which is even more powerful than the original model. However, it’s worth noting that BERT and other similar language models have also raised concerns about:

  • Their large computational requirements
  • Potential privacy implications
  • Potential bias in the training data

All true, but I asked it about Bard specifically, basically its counterpart.

Asking Bard about ChatGPT

As I am finding more and more often, Bard admits to knowing things that ChatGPT might not. I cannot speak to writing quality, but in terms of breadth of knowledge, combined with a willingness to discuss more controversial topics, Bard is ahead at this point.

Bard did not give me the standard disclaimer à la ChatGPT, and it correctly identified what I was specifically asking about.

It’s fair to say that at this point OpenAI’s offering is better known. However, this is not the only example I have encountered. For instance, Bard is much more willing to discuss Covid, nudity, media consolidation, and other topics that ChatGPT has tried to weasel out of.

I am only talking about dozens of examples, not hundreds, much less thousands, but the trend seems clear to me.

TL;DR

Bard said, of ChatGPT:

I think ChatGPT is a very impressive piece of technology. It is able to hold conversations that are surprisingly human-like, and it can be used for a variety of tasks, such as generating text, translating languages, and writing different kinds of creative content. I am particularly impressed by ChatGPT’s ability to learn and adapt. The more it is used, the better it gets at understanding and responding to human input.

I believe that ChatGPT has the potential to be a very useful tool for a variety of people. It can be used by students to learn new things, by businesses to generate marketing materials, and by creative people to generate new ideas. I am excited to see how ChatGPT is used in the future.

However, I also believe that it is important to use ChatGPT responsibly. It is important to remember that ChatGPT is a machine, and it is not capable of independent thought or action. It is important to use ChatGPT in a way that is safe and ethical.

Overall, I think ChatGPT is a very promising technology. I am excited to see how it is used in the future.

I think that the same can be said of Bard.