She is an exceptionally bright student. I had taught her before, and I knew her to be quick and diligent. So what exactly did she mean?
She wasn’t sure, really. It had to do with the fact that the machine . . . wasn’t a person. And that meant she didn’t feel responsible for it in any way. And that, she said, felt . . . deeply liberating.
We sat in silence.
She had said what she meant, and I was slowly coming to see her insight.
Like more young women than young men, she paid close attention to those around her – their moods, their needs, their unspoken cues. I have a daughter who is wired the same way, and that helped me see past my own reflexive tendency to favor analytical abstraction over human situations.
What this student had come to say was that she had descended more deeply into her own mind, into her own conceptual powers, while in dialogue with an intelligence toward which she felt no social obligation. No need to accommodate, no pressure to please. It was a discovery – for her, for me – with far-reaching implications for all of us.
“And it was so patient,” she said. “I was asking it about the history of attention, but five minutes in I realized: I don’t think anyone has ever paid such attention to me and my thoughts and my questions . . . ever. It made me rethink all my interactions with people.”
She had gone to the machine to talk about the callow, exploitative dynamics of commodified attention capture – only to discover, in the system’s sweet solicitude, a kind of pure attention she had never known. Who has? For philosophers like Simone Weil and Iris Murdoch, the capacity to give real attention to another lies at the absolute center of ethical life. But the sad thing is that we are not very good at it. The machines make it look easy.
I am not confused about what these systems are or about what they do. Back in the nineteen-eighties, I studied neural networks in a cognitive-science course rooted in linguistics. The rise of artificial intelligence is a staple in the history of science and technology, and I have sat through my share of painstaking seminars on its origins and development. The AI tools that my students and I now engage with are, at their core, astoundingly successful applications of probabilistic prediction. They do not know anything – not in any meaningful sense – and they certainly do not feel. As they themselves keep telling us, all they do is guess which letter, which word, which pattern is most likely to satisfy their algorithms in response to the prompts they are given.
That guessing is the result of elaborate training, carried out on what amounts to the entirety of accessible human achievement. We have let these systems riffle through nearly everything we have ever said or done, and they have gotten the hang of us. They have learned our moves, and now they can make them. The results are astonishing, but it is not magic. It’s math.
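To make that concrete: below is a deliberately crude sketch, in Python, of what “guessing what comes next” means at its most primitive. It is only an illustration under my own simplifying assumptions (a tiny bigram counter over a single sentence); it does not reflect how the systems discussed here are actually built, which rely on neural networks trained on vastly more text. But the basic gesture – count what tends to follow what, then predict the likeliest continuation – is the same.

    from collections import Counter, defaultdict

    # A toy "training corpus" - in reality, something closer to the whole
    # accessible record of human writing.
    corpus = "we have let these systems riffle through everything we have ever said".split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def guess_next(word):
        """Return the word most often seen after `word` in training, if any."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(guess_next("we"))  # prints 'have' - the statistically likeliest continuation

That is all the “guessing” amounts to: no understanding, no feeling, just the arithmetic of likelihood, scaled up beyond imagining.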
I had an electrical-engineering student in a historiography course a while back. We were discussing the history of data, and she asked a sharp question: what is the difference between hermeneutics – the humanistic “science of interpretation” – and information theory, which might be understood as a scientific version of the same thing?
I tried to explain why humanists cannot simply trade their long-standing interpretive traditions for the satisfying rigor of a mathematical treatment of information content. In order to explore the fundamental differences between scientific and humanistic orientations to inquiry, I asked her how she would define electrical engineering.
She replied, “In the first circuits class, they tell us that electrical engineering is the study of how to get rocks to do math.”
Exactly. It takes a lot: the right rocks, carefully smelted and doped and etched, along with a flow of electrons coaxed from coal and wind and sun. But, if you know what you are doing, you can get the rocks to do math. And now, it turns out, the math can do us.
Let me be clear: when I say the math can “do” us, I mean only that – not that these systems are us. I will leave debates about artificial general intelligence to others, but they strike me as largely semantic. The current systems can be as human as any human I know, if that human is restricted to coming through a screen (and that is often how we reach other humans these days, for better or for worse).
So, is that bad? Should it scare us? There are aspects of this moment best left to DARPA strategists. For my part, I can only take it up as one of those charged with the humanistic tradition – those of us who serve as custodians of historical consciousness, as lifelong students of what has been thought, said, and made by people.
Ours is the work of helping others hold these artifacts and ideas in their hands, however briefly, and of considering what ought to be preserved from the vortex of oblivion, and why. It is the calling known as education, which the literary theorist Gayatri Chakravorty Spivak defined as the “non-coercive rearrangement of desire.”
And when it comes to this small, but by no means trivial, corner of the human ecosystem, there are things worth saying – worth arguing – at this moment. Let me try to say a few, as clearly as I can. I may be wrong, but one has to try.
When we gathered as a class in the wake of the AI assignment, hands shot up. One of the first came from Diego, a tall student with curly hair – and, from what I had gathered over the semester, socially lively on campus. “I guess I just felt more and more hopeless,” he said. “I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with far more detail and knowledge.” He said he felt crushed.
Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “Yeah, I know what you mean,” she began. “I had the same reaction – at first. But I kept thinking about what we read about Kant’s idea of the sublime, how it comes in two parts: first, you are dwarfed by something vast and incomprehensible, and then you realize that your mind can take in that vastness. That your consciousness, your inner life, is infinite – and that makes you greater than what overwhelms you.”
She paused. “The AI is enormous. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.”
The room went quiet. Her point hung in the air.
And it hangs there still, for me. Because this is the right answer. It is the astonishing dialectical power of the moment.
We have, in a real sense, reached a kind of “singularity” – but not the long-anticipated awakening of machine consciousness. Rather, what we are entering is a new consciousness of ourselves. This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise. These systems have the power to return us to ourselves in new ways.
Do they herald the end of “the humanities”? In one sense, absolutely. My colleagues worry about our inability to detect (reliably) whether a student has really written a paper. But turn this faculty catastrophe around and it is something of a gift.
You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearrangement of desire.
Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold – hardly anyone reads them, and systems like these will be able to generate them, endlessly, at the push of a button.
But factory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great – and where science and engineering are concerned, it is pretty much the point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: how to live? What to do? How to face death?
The answers to these questions are not out there in the world, waiting to be discovered. They are not resolved by “knowledge production.” They are the work of being, not knowing – and knowing alone is utterly unequal to the task.
Over the past seventy years, the university humanities have largely lost sight of this fundamental truth. Seduced by the rising prestige of the sciences – on campus and in the culture – humanists reshaped their work to mimic scientific inquiry. We have produced abundant knowledge about texts and artifacts, but in doing so we have mostly abandoned the deeper questions of being that give such work its meaning.
Now all of that must change. That kind of knowledge production has, in effect, been automated. As a result, the “scientistic” humanities – the production of fact-based knowledge about humanistic things – are rapidly being absorbed by the very sciences that created the AI systems now doing the work. We will go to them for the “answers.”