Bullnose Forums

Have You Played With Microsoft Copilot?



We shouldn't be surprised by any of that. I've not had Edge take over my Chrome sessions or searches, but I have had plenty of times when it asked me to make Edge my preferred browser. As for it making up the time, I like the statement that "sometimes factuality comes second". And if you are playing with a giant child in the sky, why would you expect it to keep your thoughts private? :nabble_smiley_cry:

None of it surprises me, but there seem to be an awful lot of people who should know better who believe it.

Cohen is a fine example.

As a lawyer looking at going to jail, would you rely on some unsubstantiated babble from a chatbot and put it in front of a judge? :nabble_anim_crazy:


The vast majority of people don't know that it is unsubstantiated babble. And therein lies the problem. I assumed that it wouldn't lie, but it absolutely does. And I assumed it would follow explicit directions, and it absolutely refuses to do so.

It really needs to print a disclaimer that says "Everything I tell you is suspect, as I make things up and cannot be bothered to follow your directions." But people wouldn't read that, so why bother?


IOW, the VAST majority of people are complete idiots.


I would say "uninformed". All of the hype in the news is about how it can produce fakes. As you've found, there are write-ups out there that prove it lies and won't follow directions, but you have to go looking for them. So unless you've played with it and caught it in the lies and/or not following directions, you won't have been informed.

Having said that, there sure are a bunch of idiots out there. :nabble_smiley_wink:

 


Maybe I'm just jaded and cynical?

Maybe it's the autism?

There's NO WAY I would ever put my fate in the hands of AI as it stands now.


  • 2 weeks later...


https://arstechnica.com/information-technology/2024/02/amnesia-begone-soon-chatgpt-will-remember-what-you-tell-it-between-sessions/

Gary, you were saying there was no persistence of "thought" with Microsoft's chatbot.
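A rough way to picture the cross-session "memory" feature the article describes: notes saved to disk survive a restart and can be prepended to the next conversation. This is a hedged sketch — the file location, note format, and function names here are illustrative assumptions, not ChatGPT's actual mechanism.

```python
# Hedged sketch of cross-session "memory": notes written to disk persist
# between runs, unlike an in-memory chat history. All names and the file
# format are assumptions for illustration only.
import json
import os
import tempfile

def recall(path):
    """Load previously saved notes; empty list if none exist yet."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return []

def remember(note, path):
    """Append a note and write the whole list back to disk."""
    notes = recall(path)
    notes.append(note)
    with open(path, "w") as f:
        json.dump(notes, f)

# Demo: notes written here are still readable after the process restarts.
memory_path = os.path.join(tempfile.gettempdir(), "chat_memory_demo.json")
remember("User drives a 1985 Bullnose F-150", memory_path)
print(recall(memory_path))
```

The design point is simply that memory lives outside the chat session; anything kept only in the conversation is gone when the session ends.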


That proves what I experienced was real, not just perceived. And each chat has a finite limit, so just when you are "getting there" you have to quit and start over.

But while having it remember what it learns from chatting with me would help, it still won't know what it learned from me when you ask a question. So somehow it needs to be able to connect things across users. And that gets scary! :nabble_smiley_oh:
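The "finite limit" on each chat comes from the model's fixed-size context window: once the token budget is spent, the oldest turns fall off. A minimal sketch of that trimming — the word-count "tokenizer" and the budget are toy assumptions, not how any real model counts tokens:

```python
# Sketch of why each chat "has a finite limit": the model only sees a
# fixed-size context window, so older turns are dropped once the budget
# is exceeded. Toy tokenizer and budget; real models use thousands of tokens.

def count_tokens(text):
    # Crude stand-in for a real tokenizer: one word = one token.
    return len(text.split())

def trim_history(history, budget):
    """Keep only the most recent turns that fit inside the context window."""
    kept, used = [], 0
    for turn in reversed(history):          # walk newest-first
        cost = count_tokens(turn)
        if used + cost > budget:
            break                           # older turns are "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = [
    "how do I adjust the carb",
    "which year Bullnose",
    "it is a 1985 F-150",
]
print(trim_history(history, budget=10))
```

This is why a long chat eventually makes you "quit and start over": everything the model appeared to learn early on has already slid out of the window.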


All it takes is a neural network that constantly replenishes itself, and circuits get programmed like instincts in animals.

Musk's Neuralink is scary indeed!

But when it's developed, we'll find out whether this is The Matrix or Minority Report.

Huxley was more prescient than satirical. 🤯

Just like Judge was with Idiocracy!


https://arstechnica.com/information-technology/2024/02/google-upstages-itself-with-gemini-1-5-ai-launch-one-week-after-ultra-1-0/


"It's impressive to process documents that large, but the model, like every large language model, is highly likely to confabulate interpretations across large contexts. We wouldn't trust it to soundly analyze 1 million tokens without mistakes, so that's putting a lot of faith into poorly understood LLM hands."
