Misconception: AI will take over all the mental routine, and we will create meanings
From the series Robot vs. Human
Dmitry writes in Threads:
In my opinion, once AI matures as a technology, a person's main task will be to correctly determine their point B, time after time. And that is the key skill
I answer:
Recipe: add the acronym "AI" to any text, and it instantly sounds a little smarter! 😆 People have always determined point B (correctly, after a certain number of attempts) and will continue to do so. Or are you saying that AI will finally do the work, and people will have nothing left to do once point B is placed on the map (read: once the right prompt is written)?
Dmitry:
Well, yes :) More likely, the focus will shift toward meaning, since much of what now requires routine mental work will be handed over to AI
I:
Routine thinking will not be handed over, because the machine simply selects the most probable next word. That is not thinking. When a person thinks, they draw on a huge range of inputs: their own and other people's conscious experience, intuition, forecasting, emotions, a great deal of everything.
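To make the "selects the most probable next word" claim concrete, here is a minimal sketch of greedy next-word selection. The vocabulary and probabilities are invented for illustration; a real language model computes such a distribution over tens of thousands of tokens, but the final selection step can be as simple as this:

```python
# Toy sketch of next-word selection: a language model assigns a
# probability to each candidate next word, and greedy decoding
# just picks the argmax. The words and numbers below are made up.

def pick_next_word(probabilities):
    """Return the word with the highest probability (greedy decoding)."""
    return max(probabilities, key=probabilities.get)

# Hypothetical distribution after the prefix "The cat sat on the"
next_word_probs = {
    "mat": 0.62,
    "sofa": 0.21,
    "roof": 0.12,
    "idea": 0.05,
}

print(pick_next_word(next_word_probs))  # -> mat
```

The point of the sketch is that nothing in this loop resembles reasoning about goals or experience; it is a calculation over a probability table, repeated one word at a time.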
Say "meanings," and you immediately picture a man in a minimalist white office, sitting in silence... then suddenly he writes a brilliant formula, a word, a sketch on a piece of paper and leans back in his Herman Miller Aeron with relief, smiling slyly. But to arrive at that "meaning" you have to think, and a machine cannot (and will not be able to) do that. What it can do is help us along the way by taking on genuinely routine tasks with a predictable result, one whose quality we can verify.
Dmitry:
I agree. Do you work with GPT or Bard? I can see the essence is already conveyed there. It may not truly represent human experience, but it simplifies things a lot.
I:
It depends on what you mean by the routine thinking of a language model. I would more accurately call it word computation. That is useful, but nothing new: computers have always computed. It's just that today this computing has reached the world in the simplest and most understandable interface, a chat. I find language models very useful myself. I work with GPT (only 3.5 so far) and Bard. Right now I use them mostly for coding, because I am building my website.
It's good that the author initially called meaning "point B"; it's easier to describe it that way. We cannot simply put point B on the "map" and declare it correct. You need to know exactly where to put it. Which means there is always thinking behind meaning.
Someone will surely comment that it is possible to check the results of the AI's work: let it calculate everything, and we will only verify the result¹, confirming that point B can indeed be here. But to verify, we must know, and if we already know, why do we need the AI?
AI will not replace humans.
1. Andrey Konyaev also talked about this in an episode of the podcast "Seryozha and the Microphone" (the example about building an ambulance route).