ArdWrknTrk Posted January 31

We shouldn't be surprised by any of that. I've never had Edge take over my Chrome sessions or searches, but it has asked me plenty of times to make it my preferred browser. As for it making up the time, I like the statement "and sometimes factuality comes second". And if you are playing with a giant child in the sky, why would you expect it to keep your thoughts private?

None of it surprises me, but there seem to be an awful lot of people who should know better who believe it. Cohen is a fine example. As a lawyer facing jail time, would you rely on some unsubstantiated babble from a chatbot and put it in front of a judge?
Gary Lewis (Author) Posted January 31

The vast majority of people don't know that it is unsubstantiated babble, and therein lies the problem. I assumed that it wouldn't lie, but it absolutely does. And I assumed it would follow explicit directions, but it absolutely refuses to do so. It really needs to print a disclaimer that says, "Everything I tell you is suspect, as I make things up and cannot be bothered to follow your directions." But people wouldn't read that, so why bother?
ArdWrknTrk Posted January 31

IOW, the VAST majority of people are complete idiots.
Gary Lewis (Author) Posted January 31

I would say "uninformed". All of the hype in the news is about how it can produce fakes. As you've found, there are writings out there that prove it lies and won't follow directions, but you have to go looking for them. So unless you've played with it and caught it lying and/or ignoring directions, you won't have been informed. Having said that, there sure are a bunch of idiots out there.
ArdWrknTrk Posted January 31

Maybe I'm just jaded and cynical? Maybe it's the autism? There's NO WAY I would ever put my fate in the hands of AI as it stands now.
ArdWrknTrk Posted February 14

https://arstechnica.com/information-technology/2024/02/amnesia-begone-soon-chatgpt-will-remember-what-you-tell-it-between-sessions/

Gary, you were saying there was no persistence of "thought" with Microsoft's chatbot.
Gary Lewis (Author) Posted February 14

That proves what I experienced was real, not just perceived. And each chat has a finite limit, so just when you are "getting there" you have to quit and start over. Having it remember what it learns from chatting with me would help, but right now it doesn't know what it has learned when you ask a question. Somehow it needs to be able to connect things. And that gets scary!
ArdWrknTrk Posted February 14

All it takes is a neural network that constantly replenishes itself, and circuits get programmed like instincts in animals. Musk's Neuralink is scary indeed! But when it's developed, we'll find out whether this is The Matrix or Minority Report. Huxley was more prescient than satirical. 🤯 Just like Mike Judge was with Idiocracy!
ArdWrknTrk Posted February 16

https://arstechnica.com/information-technology/2024/02/google-upstages-itself-with-gemini-1-5-ai-launch-one-week-after-ultra-1-0/
Gary Lewis (Author) Posted February 16

"It's impressive to process documents that large, but the model, like every large language model, is highly likely to confabulate interpretations across large contexts. We wouldn't trust it to soundly analyze 1 million tokens without mistakes, so that's putting a lot of faith into poorly understood LLM hands."