Jump to content
Bullnose Forums

Have You Played With Microsoft Copilot?


Recommended Posts

Q1: Tell me about a Bullnose truck named Big Brother.

A1: Copilot gave me a speech about Orwell's 1984 novel, and admitted that it didn't find any data about a Bullnose truck named Big Brother.

Q2: You should look here: https://forum.garysgaragemahal.com/Full-restoration-84-Crew-Cab-4x4-Long-Bed-tp105177.html

A2:

Q3: I thought that the owner name is Jeff.

A3:

Q4: Good! From now, if I ask you to tell me about a Bullnose Ford truck named Big Brother, I suppose you will find it? Because earlier you didn't...

A4:

I am a bit surprised... No possible training?

«I learn but I forget, not

enough memory».

:nabble_anim_confused:

Yup. Same thing I discovered, although in my case he didn't tell me that. I had to find that out on my own, but it is true. :nabble_smiley_cry:

Link to comment
Share on other sites

  • Replies 213
  • Created
  • Last Reply

Top Posters In This Topic

Yup. Same thing I discovered, although in my case he didn't tell me that. I had to find that out on my own, but it is true. :nabble_smiley_cry:

For some reason I tried one more time to use Bing/Copilot. Was watching the national championship and had time, so why not.

I gave it a link to my Word document called Big Blue's Spec Card, the one I print and put with the truck at shows. And I asked if it could access it. It said it could and then proceeded to give me information from a completely different document.

I proved that I'd given it the right link as I put the link in a browser tab and the document opened up. So I provided some feedback to the Copilot team explaining what had happened and that I was through playing with their toy. When, if ever, it is capable of following simple instructions to contact me.

:nabble_face-with-open-mouth-vomiting-23x23_orig:

Link to comment
Share on other sites

For some reason I tried one more time to use Bing/Copilot. Was watching the national championship and had time, so why not.

I gave it a link to my Word document called Big Blue's Spec Card, the one I print and put with the truck at shows. And I asked if it could access it. It said it could and then proceeded to give me information from a completely different document.

I proved that I'd given it the right link as I put the link in a browser tab and the document opened up. So I provided some feedback to the Copilot team explaining what had happened and that I was through playing with their toy. When, if ever, it is capable of following simple instructions to contact me.

:nabble_face-with-open-mouth-vomiting-23x23_orig:

With a copilot like that in the cockpit, the passengers have to pray that their commander pilot will stay alert and healthy for the whole flight.

Link to comment
Share on other sites

With a copilot like that in the cockpit, the passengers have to pray that their commander pilot will stay alert and healthy for the whole flight.

Right! You sure don't want to have to let Copilot fly. You'd end up at the wrong destination. :nabble_smiley_cry:

Link to comment
Share on other sites

Interesting. But, at least now, there's a way to disable it.

I do foresee a day when AI can, or maybe I should say "will", do things like I've been trying to do. Such as parse a document, meaning the document that's open or in the link, and give feedback on how it can be improved. (That feature is already available via opening the document in Word itself, but not if you just give it a link to the document.) And scanning a website to recommend ways of reorganizing it. Or comparing statistics that it can glean itself regarding traffic on a forum. And remembering what it learns or is taught.

But right now it feels like we are talking to an extremely smart baby that has no understanding and no realization that it is supposed to follow ALL of the instructions it has been given. Having done a lot of coding, I can't imagine trying to write a program when the CPU doesn't honor all of the instructions. But that appears to be where we are.

Link to comment
Share on other sites

Interesting. But, at least now, there's a way to disable it.

I do foresee a day when AI can, or maybe I should say "will", do things like I've been trying to do. Such as parse a document, meaning the document that's open or in the link, and give feedback on how it can be improved. (That feature is already available via opening the document in Word itself, but not if you just give it a link to the document.) And scanning a website to recommend ways of reorganizing it. Or comparing statistics that it can glean itself regarding traffic on a forum. And remembering what it learns or is taught.

But right now it feels like we are talking to an extremely smart baby that has no understanding and no realization that it is supposed to follow ALL of the instructions it has been given. Having done a lot of coding, I can't imagine trying to write a program when the CPU doesn't honor all of the instructions. But that appears to be where we are.

Yes, fortunately it isn't written into the bios or something...

But do note up-thread somewhere the hard coded shortcut key that's shipping today.

If Copilot is a petulant child that refuses to follow directions and lies constantly should it really have a place on my keyboard?

Link to comment
Share on other sites

Yes, fortunately it isn't written into the bios or something...

But do note up-thread somewhere the hard coded shortcut key that's shipping today.

If Copilot is a petulant child that refuses to follow directions and lies constantly should it really have a place on my keyboard?

I bought this Surface Pro 9 just before they announced the 10 with the Copilot key, so it will be years before I'm ready to buy a new PC. Hopefully by then it will work correctly, meaning follow directions and not make stuff up.

Link to comment
Share on other sites

I bought this Surface Pro 9 just before they announced the 10 with the Copilot key, so it will be years before I'm ready to buy a new PC. Hopefully by then it will work correctly, meaning follow directions and not make stuff up.

https://arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/

Link to comment
Share on other sites

That's scary! But what it is saying makes sense. Especially the closing statement:

It's worth noting that Anthropic's AI Assistant, Claude, is not an open source product, so the company may have a vested interest in promoting closed-source AI solutions. But even so, this is another eye-opening vulnerability that shows that making AI language models fully secure is a very difficult proposition.

And it fits with what I've seen - that at least Copilot will not adhere to the guidelines you give it. Period. So you cannot trust the results. Ask Michael Cohen.

Link to comment
Share on other sites


×
×
  • Create New...