Imagination: S.T. Coleridge, C.S. Lewis, J.R.R. Tolkien

Day after day, day after day,
We stuck, nor breath nor motion;
As idle as a painted ship
Upon a painted ocean.
Water, water, every where,
And all the boards did shrink;
Water, water, every where,
Nor any drop to drink.

Samuel Taylor Coleridge, The Rime of the Ancient Mariner

“Like Ferdinand Magellan’s great voyage in pursuit of new passages to new worlds, since the seventeenth and eighteenth centuries the Western intellectual tradition has sailed far and wide and discovered many wonderful things. But just as Magellan faced great risk navigating the treacherous waters around Cape Horn, the tumultuous intellectual waters of the modern West presented great risks too. How would we find our way?

Samuel Taylor Coleridge captures the picture well in his poem The Rime of the Ancient Mariner. Having been guided through the treacherous waters and early-modern fog by the Albatross (the soul), the captains of the new world became annoyed by the bird. In hubris, they killed it and hung it as a trophy around the neck, believing it a good thing that they had killed it. By the mid-twentieth century, the ship of modernism stalled. It is now adrift in a postmodern sea, bobbing like a cork with no soul to guide it.”

I wrote these words in my book, Rediscovering God’s Grand Story: In a Fragmented World of Pieces and Parts. My intention in writing them was to draw attention to how the modern world has become adrift like the Ancient Mariner. It has. I repeat them now for a different purpose: to raise the question of what imagination is.

This is not an easy question to answer because the answer varies by the period and the thinkers considered. Though all summaries suffer from imprecision, I’ll nevertheless make a few sweeping generalizations. I’ll say there are four broad periods for considering what imagination is (or means):

1. The classical period, from the ancient Greeks (especially Plato, and Aristotle in his De Anima, Book III) and Romans up through the Neo-Platonists;

2. The Christian period, from St. Augustine through the Middle Ages (including the mystics) up to the Renaissance;

3. The Romantic period, broadly from Goethe, Blake, Keats, Coleridge, Wordsworth, Byron, Shelley, and Browning up to and including, with a kind of classical/Romantic twist, Owen Barfield, Lewis, and Tolkien; and

4. The Modern period, including such poets as Hardy, Pound, Yeats, Eliot, Stevens, Cummings, Frost, and so on.

I could add a fifth, the Postmodern period, but I won’t address it in this paper.

The place or role of imagination differs through these periods. In the ancient period, arising from the Greek philosophers, imagination (phantasia, the image-making capacity of human beings) is largely a faculty related to perception, resting on “images” (including those occurring in dreams) rather than on reason. In De Anima, Book III.3, Aristotle says imagination (phantasia) is “that in virtue of which an image occurs in us.” Plato viewed dialectical reason as decidedly superior to imagination: because imagination could be faulty, it was not considered as valuable as the rational faculties. Art for the ancient Greek philosophers was not harmful as such (though it could be), but it did not lead to the good life. Art for Plato is mimesis, imitating life as in plays, and to the extent it achieves its goal of imitating life well, it can evoke the emotions and feelings of the audience for good or ill. Historian J. M. Cocking[1] says, “For Plato, art improves in status as it moves away from naked feeling to moral judgment and uses its spell-binding power to promote virtue rather than to awaken thrills.” In this critical Platonic sense, imagination was looked upon with a kind of wariness or suspicion. Unlike Wordsworth in the nineteenth century, who claimed imagination is a doorway into transcendent knowledge, Plato made no such claim. The ancient Greeks did not consider imagination to be without epistemic value, only prone to error. Shades of mystery and this attitude of wariness prevailed largely through the Middle Ages. Cocking says: “For Aristotle it is a puzzling, intriguing aspect of mental life, hard to relate to the other parts of the soul—neither sensation nor thought, but somewhere between the two (De anima 432b). There Aristotle consistently puts it; and there it was to stay, in the rationalist West, until the Renaissance changed both its nature and its status.”

Sort of. The Neo-Platonists came to view imagination as a vehicle for the soul. Neo-Platonism significantly influenced the Church Fathers and triggered some heresies (Arianism[2]). St. Augustine imbibed deeply of Neo-Platonism before his conversion (he also followed Manicheism[3] for a time). But it was later, as a theologian thinking through these complexities, that Augustine set a moderated path. For him, human beings recognize in our souls that we are made in the image of God, but this recognition doesn’t happen except through knowledge.

Knowledge for Augustine is always above imagination, and abstract knowledge is repeatedly distinguished from mental images. What the mind presents to itself in the form of images is as ethically suspect to Augustine as it was to Epictetus and the later Stoic moralists (City XI.26). . . . Augustine distinguished three kinds of seeing—bodily seeing, which is sensation together with consciousness of sensation, the mental representation or ‘spiritual seeing’; ‘spiritual seeing’ on its own, without sensation, which includes what we now call imagining and dreaming; and ‘intellectual seeing’ or understanding. (Cocking, p. 43)

In the late fifth/early sixth century, Boethius established an integrated relationship between philosophy and imagination. Boethius had access to Aristotle, unavailable by all accounts to St. Augustine, as well as to the Platonists and Neo-Platonists. Lady Philosophy in The Consolation of Philosophy invites Boethius to use his imagination to look at historical figures and poets, judging them philosophically for the approval of the philosophical imagination.

In the Christian West, the relation between philosophy and imagination largely remained a combination of St. Augustine and Boethius until the thirteenth century, when St. Thomas and Bonaventure came along. In his Summa Theologica (ST I, 78, 4), St. Thomas, a Dominican, describes imagination as “a storehouse of forms received through the senses.” Bonaventure, a Franciscan, though not contrary to St. Thomas, stressed the mystical power of imagination: “when [the mind] knows anything, [it] journeys from Christ’s humanity to his divinity. Shining the divine light of illumination on imagination’s phantasms, Christ manages the transition from sensory to intellectual cognition.”[4]

As the Renaissance emerged in Northern Italy[5] in the fifteenth and sixteenth centuries, two things occurred with respect to the meaning and place of imagination: there was a strong desire to recover the classical world, and literary theory and criticism became a flourishing branch of literature. Against the backdrop of all the dimensions of classical philosophy, theology, and medieval and Renaissance literature, and with a marked move to a human-centered outlook coupled with growing resistance to received authorities, new views of the meaning and place of imagination developed. Creativity as a self-conscious, deliberative process began to emerge.

Then, in the late eighteenth and early nineteenth century in Britain, a new, explicitly imagination-driven movement occurred: the British Romantic Movement. The roots of this movement are in Germany with Johann Wolfgang von Goethe. It was imported to Britain when Samuel Taylor Coleridge and William Wordsworth went to Germany and brought it back home. Coleridge was the first to write a systematic theory of imagination, in his Biographia Literaria.

Coleridge says that the world is made of symbols and we all relate to them – our organs of sense are designed to correspond to the world of sense. In like manner, our organs of spirit are designed to correspond to the world of spirit, but they are not developed in all alike. “But they exist in all, and their first appearance discloses itself in the moral being.” It is the job of human beings to discover what these symbols are and what they mean. The question is, who set these symbols? Is it just us, with our own imaginations? Is it just God? “Or are these symbols created by a continuous meeting between His imagination and ours?” Coleridge says the latter.[6]

C. S. Lewis follows in Coleridge’s footsteps (“a kind of expanded Christian Romanticism”), though he does not subscribe to all things Coleridge. For him, “imagination is not simply the holding of images in one’s head, nor is it simply creative invention. It is something else, something twinned, but opposite, to reason.” As with Plato, Aristotle, Augustine, and Aquinas, imagination is not good in and of itself. It can be good or evil. But it must be married with reason. In his essay “Bluspels and Flalansferes,” Lewis discusses the issues of metaphor. “His concern is to show that metaphors are at once built-in to the very words we use on a daily basis, but that new ones may also be created.” Shades of Coleridge. But he also says in that essay that reason is the organ of truth and imagination is the organ of meaning. Yet they do not exist one without the other. This is likely an influence of both Coleridge and Lewis’s close friend Owen Barfield. After the now famous stroll down Addison’s Walk that evening after dinner with his friends J. R. R. Tolkien and Hugo Dyson, what Tolkien told Lewis about myth would add a new dimension to what Lewis thought about imagination.

After that walk Tolkien went home and wrote his poem “Mythopoeia.” This would be what opened Lewis up to the truth of the Christian Gospel. “Tolkien argues again and again that sight and true sight, is a necessary outcome of the imagination. He also argues in the poem that this right [sic, sight?] of what he will call fantasy, is one inherent to us, even when we misuse it.” Here we see, as with the ancients, a caution about imagination. In his “On Fairy-stories,” Tolkien adds another dimension to his view of imagination: human beings are sub-creators; we make things out of what has been made by God. This capacity allows us to cast worlds, to imagine worlds that don’t exist – linked to the Latin verb facere, to make. To make in the sense that an artist makes, a writer creates a story. For Tolkien this capacity is a reflection in man of the God who created him, whose story he lives within. Our sub-creative capacity is a mark of God in us.

Additional links added for June 15:

85 Years Ago Today: J. R. R. Tolkien Convinces C. S. Lewis That Christ Is the True Myth (thegospelcoalition.org)

On Fairy-Stories, J. R. R. Tolkien (Internet Archive)


Footnotes:

[1] J. M. Cocking, Imagination: A Study in the History of Ideas (London: Routledge, 1991).

[2] Arianism is first attributed to Arius (c. AD 256–336), a Christian presbyter who preached and studied in Alexandria, Egypt. Arian theology holds that Jesus Christ is the Son of God, who was begotten by God the Father with the difference that the Son of God did not always exist but was begotten/made before time by God the Father; therefore, Jesus was not coeternal with God the Father, but nonetheless Jesus began to exist outside time.

[3] Manichaeism was founded in the 3rd century AD by the Parthian prophet Mani (AD 216–274) in the Sasanian Empire. It is a form of Gnosticism. Manichaeism teaches an elaborate dualistic cosmology describing the struggle between a good, spiritual world of light and an evil, material world of darkness. It presents Mani as the final prophet after Zoroaster, Gautama Buddha, and Jesus Christ.

[4] Michelle Karnes, “Imagination in Bonaventure’s Meditations,” Imagination, Meditation, and Cognition in the Middle Ages. University of Chicago Press (2011). Available at https://www.degruyter.com/document/doi/10.7208/9780226425337-007/html?lang=en.

[5] The Renaissance began in the fourteenth century with humanism and reached its zenith in the late fifteenth through the sixteenth century. Key figures of the Renaissance include: Lorenzo de’ Medici, Niccolò Machiavelli, Leonardo da Vinci, Michelangelo, Raphael, Francis Bacon, Nicolaus Copernicus, Petrarch, William Shakespeare, Christopher Marlowe, Marsilio Ficino, Pico della Mirandola, Galileo, Christopher Columbus, Ferdinand Magellan, Hernán Cortés, Desiderius Erasmus, and Thomas More.

[6] David Russell Mosley, “Toward a Theology of the Imagination with S.T. Coleridge, C.S. Lewis, and J.R.R. Tolkien,” Religions 11, no. 5 (2020): 238, MDPI, available at https://www.mdpi.com/2077-1444/11/5/238.

Memorial Day Reflections

It was a typical scorching hot summer day in June of 1970 as I stood having meaningless chatter with some high school buddies in the cool of the shade from the big cottonwoods on the grounds of the Pleasant Oaks Recreation Center in Pleasant Grove where we grew up. It was just a short bike ride away from the Prairie Creek beds where in earlier years we explored the world, got into mischief, and played army in our Stand By Me days. A stocky gray silhouette appeared against the bright sunlight walking toward us from McCutcheon Lane. We all squinted. “Really?” we wondered. It was, it really was. Our old friend David Broach was back home. We couldn’t believe it! We hadn’t seen him since he dropped out of school and joined the Army. Everyone’s attention shifted to David as he walked up. There he was, standing by us. He’d been dropped in the jungle and survived his tour in Viet Nam and had come back home. We were so glad to see him, so glad he was home. We wanted to hear all about it. But he was quiet. He seemed glad to see us, but he didn’t have much to say. As we cajoled him, it was apparent he was preoccupied. We asked why. He finally acceded. “My Buddies,” he said. “My mind is on my Buddies 9,000 miles away from where we’re standing, and the new ones arriving every day.” Then without fanfare, he told us. “I re-upped. I’m going back.” “Why would you do that, David? Why?” we all asked in astonishment. He said, “I’ve been there and survived. The hardest part of survival is the first 5 minutes after jumping off the helicopter. If you can survive that, you have a chance. I know what to do and the green soldiers don’t. If I have a chance to save one brother with my experience, I’ve gotta do it.” Subdued, we continued our meaningless chatter then went our separate ways. I watched for a moment as David’s presence became a silhouette again.

A month later he was back in-country. Two weeks later word came: Specialist Fourth Class E4 Earl David Broach, of Co. K, 75th Infantry (LRRP), 4th Infantry Division, was dead. I didn’t “see” David again until 1992. While working in DC, I walked across to the Mall, to the Viet Nam Memorial. As I strolled solemnly along the long black granite wall searching through the endless names, I finally saw David again. I ran my fingers across the etched name of my friend. There he was with 58,190 other names of the American boys (and girls) who went to war in Viet Nam (now over 58,300 names) and never came home again to their moms and dads, brothers and sisters, and friends.

David is in my thoughts tonight as I reflect on Memorial Day, and so many more. I think of my father, a Navy man, sitting on a lonely stool in a crowded bar on shore leave in Algiers in 1943. And my father-in-law that same year eating that “rare, unexpected, amazing” chicken dinner in his Chabua barracks at the base of the Himalayas with his buddies as they prepared to fly the treacherous “Hump” again the next day. I think of Daddy Claude, my grandfather, and his son, my uncle Carroll, who serendipitously met up in the South Pacific while serving respectively aboard the destroyer U.S.S. Heermann and the battleship U.S.S. Saratoga—that same especially dark year in the Pacific of ’43. I’m reminded of my grandfather’s double sacrifice, his second call to duty, having first mustered into the Army 26 years earlier in 1917 during the Great War. I think, too, of my second cousin, Josh Wilson, and his wife Sara, both West Point grads and veterans of the Iraq War, and of my nephews, retired Air Force Major Robert Babb and Michael Bond, a multi-tour veteran of Afghanistan, and of the terrible PTSD struggles Michael is left with. I think about so many others in my family who served in the various Services and the Guards, and the countless others in our country who have served and sacrificed so much for us through the years.

As I reflect, I am struck by those famous words of President Lincoln on that solemn day in 1863, when on the short ride from Washington to Gettysburg he penned the words “that last full measure of devotion,” thinking of those on both sides who gave their lives on that now hallowed battlefield. It rings in my ears as I think of David, and of the many who at this very moment are taking off or putting on their specially fitted legs and arms, and of those who feel as though there is nothing left to live for, having lost their way back here at home, no longer having the comradery of the “brotherhood” of war.

What more noble and courageous thing can one do than lay down his life for his friends, for his family, for his country? What more ignoble thing can human beings do to each other than go to war? We see the best of ourselves in the sacrifice given; we see the worst of ourselves when we kill each other. War is hell!

I think of David and all my family members who gave so much to this country, and then I watch the TV these days. I think of George Washington and the advice he gave his fellow citizens in his Farewell Address in 1796. He says a couple of things we so easily forget these days, words of challenge to ensure the preservation of the union for which so many sacrificed so much. We must always remember that we are one (e pluribus unum, as the original American motto said). And to keep that union we are called to goodness; we must be a people of moral virtue. When we fail at these, our sacrifices become lost in a fog. We lose our way. It’s worth revisiting President Washington’s wise advice in his Farewell Address.

The unity of government which constitutes you one people is also now dear to you. It is justly so, for it is a main pillar in the edifice of your real independence, the support of your tranquility at home, your peace abroad; of your safety; of your prosperity; of that very liberty which you so highly prize. But as it is easy to foresee that, from different causes and from different quarters, much pains will be taken, many artifices employed to weaken in your minds the conviction of this truth;

. . .

The name of American, which belongs to you in your national capacity, must always exalt the just pride of patriotism more than any appellation derived from local discriminations. With slight shades of difference, you have the same religion, manners, habits, and political principles.

. . .

I have already intimated to you the danger of parties in the State . . . . Let me now take a more comprehensive view, and warn you in the most solemn manner against the baneful effects of the spirit of party generally.

This spirit, unfortunately, is inseparable from our nature, having its root in the strongest passions of the human mind. It exists under different shapes in all governments, more or less stifled, controlled, or repressed; but, in those of the popular form, it is seen in its greatest rankness, and is truly their worst enemy.

The alternate domination of one faction over another, sharpened by the spirit of revenge, natural to party dissension, which in different ages and countries has perpetrated the most horrid enormities, is itself a frightful despotism. But this leads at length to a more formal and permanent despotism. The disorders and miseries which result gradually incline the minds of men to seek security and repose in the absolute power of an individual; and sooner or later the chief of some prevailing faction, more able or more fortunate than his competitors, turns this disposition to the purposes of his own elevation, on the ruins of public liberty.

And then he says,

Of all the dispositions and habits which lead to political prosperity, religion and morality are indispensable supports. In vain would that man claim the tribute of patriotism, who should labor to subvert these great pillars of human happiness, these firmest props of the duties of men and citizens. … A volume could not trace all their connections with private and public felicity. Let it simply be asked: Where is the security for property, for reputation, for life, if the sense of religious obligation desert the oaths which are the instruments of investigation in courts of justice? And let us with caution indulge the supposition that morality can be maintained without religion. Whatever may be conceded to the influence of refined education on minds of peculiar structure, reason and experience both forbid us to expect that national morality can prevail in exclusion of religious principle.

It is too easy to garner sacrifice under the banner of excessive partisanship, grievance, and revenge, and thereby tarnish patriotism. Good judgment becomes clouded, and the exercise of power becomes unbound from its proper restraint. Patriotism and “Party-ism” not underwritten by the centripetal force of virtue make everything fall apart, as Yeats captures in his poem The Second Coming:

Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

Patriotism must be born of virtue in service of the common good, underwritten by the “religious principle.” The due honor for the sacrifices of my friend David Broach and the many who have sacrificed so much for this country derives from virtue in pursuit of the common good of all. As Washington urged us to remember: “Of all the dispositions and habits which lead to political prosperity, religion and morality are indispensable supports. In vain would that man claim the tribute of patriotism, who should labor to subvert these great pillars of human happiness.” As the words often attributed to Alexis de Tocqueville put it, “America is great because she is good. If America ceases to be good, America will cease to be great.”

On this Memorial Day, let us honor those who have given that last full measure of devotion … and who did so in the constant belief that their sacrifice was in the service of the good. Let us give honor to those who sacrificed so much by giving themselves in pursuit of the good of the One who is the source of the Common Good, the One in whom is the basis of our liberty and the tie that binds the many into one.

God bless those whose sacrifices secured and sustain our liberty, including my friend David. God, bless us all in such a way that our greatest happiness should be found in sacrificing ourselves in pursuit of Your goodness; for when we fail at this, we whitewash the sacrifices of those we memorialize on this day.

Copyright James M. Roseman

Artificial Intelligence

The world is awash with AI these days. ChatGPT grabbed the headlines last year. With Microsoft’s one-billion-dollar investment in OpenAI and its Bing Copilot, and Google AI’s Gemini, you use AI every time you search the web. If you are on LinkedIn (a Microsoft company), every other post is about AI and its use in big business, and AI is embedded in the LinkedIn platform itself—for job recommendations, image credits, people you may know, profile building, and more. Meta’s Facebook and its Messenger, Instagram, and WhatsApp suite of services use AI to analyze every picture and video you look at, every “like” you click or post you make, all under the banner of improving your “experience.” But from a business point of view, for Meta and all social media companies, the data and AI-driven analytics are used both to sell commercial advertisements and to mine and monetize data for additional revenue. Every social media app and platform uses AI.

Every major business uses AI for core business purposes and for running the business: choosing or changing the core business model; analyzing and charting strategy; operating and securing the business; solving and accelerating the ever-changing needs of the IT infrastructure, both the fixed and on-demand requirements, using public/private and on-premise/off-premise cloud computing and Edge computing; developing new products and services, including advancing the Internet of Things (IoT); managing the ever more complex global 24/7-365 suite of business and customer-interface applications; and analyzing, managing, and performing user support (inside the business and for its customers) to reduce errors and “enhance the experience.”

In the world of healthcare and Big Pharma, AI has become part of the core business for research and management: e.g., in diagnostics and robotics, and in the research and discovery of new drugs, such as the COVID-19 mRNA vaccines. Generative AI is the fastest-growing tool in every pharmaceutical company in the world. And AI is sweeping the worlds of agriculture, logistics and supply chain management, law, national security, geopolitics, and warfare. It is simply everywhere, all the time. It is in every sphere of life: media, universities (especially in research programs and scholarly work), economic forecasting and central banks, and every government and defense department in the world. And every entity that uses AI for its own positive purposes—social, economic, political, defense, etc.—must protect itself from bad actors who use AI for nefarious purposes.

In the 1940s, the mathematicians and pioneers of modern computing Alan Turing (the famed WWII codebreaker) and John von Neumann had come to believe that computers and the human brain were analogous. Given this analogy, it seemed obvious to them that “human intelligence could be replicated in computer programs.” (Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (2019)) In 1956 Dartmouth mathematics professor John McCarthy and some close friends and colleagues convened a two-month, ten-man workshop to explore creating a thinking machine. McCarthy needed a name for the purpose of the workshop, so he coined the term “Artificial Intelligence,” though he later admitted that his originally convening colleagues—Marvin Minsky, Claude Shannon (inventor of information theory), Oliver Selfridge, Ray Solomonoff, and Peter Milner—didn’t like the term. The reason, according to McCarthy’s biographical memoir, was that the actual goal was to create a machine with genuine intelligence, not artificial intelligence. With that goal in mind, the raison d’être of AI research has ever since been to achieve machine intelligence equal to or exceeding human intelligence.

So, what exactly is AI? AI is a complex system of hardware and software technologies built to compile and analyze extremely large data sets very quickly, using complicated algorithms1 to produce pattern-based predictive and “creative” guess outcomes, “insights,” and actions (as in robotic and automated physical machine systems designed to mimic human behavior). AI systems are built on how neuroscientists understand the brain to work. Here’s how: Neurons receive a vast array of perceptions (sometimes unconscious) that trigger electrical and chemical inputs connected by synapses. The neuron weighs the inputs, and when they reach a “threshold,” the neuron fires. Synapses have different strengths that create stronger and weaker connections; thus certain inputs get greater or lesser weight. “Neuroscientists believe that adjustments to the strength of connections between neurons is a key part of how learning takes place.” (Mitchell, 24) AI systems are built on this understanding with a similar but simplified structure using what are called “perceptrons”: perceptrons receive an array of inputs with different assigned (programmed) weights; these are added together, and when the sum reaches an assigned (programmed) threshold, the output, yes or no (1s and 0s), renders the “decision.” Repeating this over and over again using algorithms, the computer “learns.” This is what generative AI does with what are called large language models (LLMs) and neural network models (Deep Learning). This method is intended to imitate human thought, cognition, learning, reasoning, and creativity, based on prevailing theories of how the human brain, cognition, and imagination work as a complex network of physical and biological processes.
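To make the perceptron mechanism concrete, here is a minimal sketch in Python. It is a toy illustration of the description above, not code from Mitchell or from any real AI system; the weights, threshold, learning rate, and the AND-function example are all invented for illustration.

```python
# A toy perceptron: weighted inputs are summed and compared to a threshold,
# and the weights are nudged whenever the yes/no "decision" is wrong.

def perceptron_output(inputs, weights, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def train(examples, weights, threshold, learning_rate=0.1, epochs=20):
    """Classic perceptron learning rule: shift each weight toward the right answer."""
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - perceptron_output(inputs, weights, threshold)
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Learning the logical AND function: fire only when both inputs are 1.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights = train(examples, weights=[0.0, 0.0], threshold=0.5)
for inputs, _ in examples:
    print(inputs, "->", perceptron_output(inputs, weights, threshold=0.5))
```

Repeated over the examples, the weight adjustments play the role the paragraph describes for synaptic strengths: inputs that matter to the decision end up carrying more weight.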

A lot has happened in the world of computing and AI since 1956. But suffice it to say, here’s where we are today. IBM breaks it down this way. There are three kinds of AI: Artificial Narrow AI; General (or Strong) AI; and Super AI. Only the first, Artificial Narrow AI, exists today. The other two are part of the dream McCarthy and company imagined all those years ago.

Within these three kinds, IBM identifies four functional types: Reactive; Limited Memory; Theory of Mind; and Self-Aware. Only Reactive and Limited Memory exist today.

IBM says, “Reactive machines are AI systems with no memory and are designed to perform a very specific task. Since they can’t recollect previous outcomes or decisions, they only work with presently available data. Reactive AI stems from statistical math and can analyze vast amounts of data to produce a seemingly intelligent output.” Examples of Reactive AI are IBM’s Deep Blue winning against chess grandmaster Garry Kasparov in the late 1990s and Netflix recommending movies based on what you have watched previously.
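The defining feature is statelessness. Here is a hypothetical sketch of that idea (the catalog, titles, and genres are invented): each call computes its answer only from the data presently handed to it and retains nothing afterward.

```python
# Hypothetical "reactive" recommender: no memory of previous calls.
# Each recommendation is computed only from the data available right now.

def recommend(catalog, liked_genres):
    """Rank titles purely by genre overlap with the preferences given in this call."""
    scored = [(len(genres & liked_genres), title) for title, genres in catalog.items()]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

catalog = {
    "Film A": {"sci-fi", "drama"},
    "Film B": {"comedy"},
    "Film C": {"sci-fi", "action"},
}
print(recommend(catalog, {"sci-fi"}))  # ['Film C', 'Film A'] -- nothing is remembered
```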

Limited Memory AI can recall past events and outcomes to inform decisions in the moment, but only for a short period; it cannot store memories in a library for recall and use later. It can, however, be trained; it can learn (this is what is called machine learning and Deep Learning). Generative AI tools such as ChatGPT, Google’s Gemini, and DeepAI “rely on limited memory AI capabilities to predict the next word, phrase or visual element within the content it’s generating.” Virtual assistants like Siri, Alexa, and Google Assistant, and self-driving cars, are examples of Limited Memory AI. Reactive and Limited Memory AI exist today in varying ways.
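The quoted phrase “predict the next word” can be illustrated at toy scale. The bigram counter below is an assumed, radically simplified stand-in: real generative models use neural networks with billions of learned weights, but the predict-the-next-item framing carries over.

```python
# Toy next-word predictor built from bigram counts -- a drastically simplified
# stand-in for the next-word prediction that large language models perform.
from collections import Counter, defaultdict

def build_model(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most often observed after `word`, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept"  # invented training text
model = build_model(corpus)
print(predict_next(model, "the"))  # 'cat' -- follows 'the' twice, 'mat' only once
print(predict_next(model, "on"))   # 'the'
```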

Theory of Mind AI and Self-Aware AI do not exist. They fall under General AI and Super AI. Theory of Mind “functionality would understand the thoughts and emotions of other entities. This understanding can affect how the AI interacts with those around them.” Self-Aware AI “would possess super AI capabilities. Like theory of mind AI, Self-Aware AI is strictly theoretical. If ever achieved, it would have the ability to understand its own internal conditions and traits along with human emotions and thoughts. It would also have its own set of emotions, needs and beliefs.”

Reactive and Limited Memory AI are everywhere today and expanding fast. The 2023 Stanford Emerging Technology Review says Generative AI alone “is estimated to raise global GDP by $7 trillion and lift productivity growth by 1.5 percent over a ten-year period, if adopted widely.” As mentioned above, the use of AI is in every industry and every sector of the economy, in governments, law, geopolitics, and national defense and warfare. The take-up of these two types of AI is so fast it is impossible to keep track.

It is indubitable that Reactive and Limited Memory AI offer huge efficiencies and technical advantages—e.g., in medical diagnostics and research, and in data access and analysis for a host of beneficial uses. But even in these are buried questions of goodness or evil—e.g., are efficiency and the ability to crunch more data faster a moral good as such? Is the technologization of everything a societal good? Others argue that it is not merely the AI technology itself that is of concern, but rather what it takes to create AI: the ecological impact of mining the core materials, the abusive sweatshops required to build the devices, the electrical power required to run the data centers full of servers, and so on. (See Kate Crawford, Atlas of AI (2021)) Nevertheless, the genie is out of the bottle. Whether these two currently realized dimensions of AI are a good thing or a bad thing, they have already affected, and will continue to affect, every sphere of life. We must think wisely about them.

But none of the concerns from these realized dimensions of AI is even the penultimate concern. To see this, one need only turn back to Alan Turing and John von Neumann’s view that “human intelligence could be replicated in computer programs,” and the Dartmouth workshop’s vision of achieving genuine intelligence in machines, not artificial intelligence. That is, the penultimate concern is Theory of Mind and Self-Aware AI. This concern is what prompted many of the AI pioneers and current leaders of AI recently to sign an open letter and testify before Congress urging caution. Here lies the very legitimate transhuman concern about AI.

The ultimate concern, which is never talked about but which clouds good judgment on the AI pursuit in general, is the materialist metaphysic and the computational theory of mind that are embedded in it. This metaphysic is an inheritance from the Enlightenment; the computational theory of mind is the inheritance of the host of new philosophies and sciences that grew up in the late nineteenth and first half of the twentieth century—e.g., neuroscience, neurophilosophy, neurobiology, cognitive science, linguistic science, and evolutionary psychology.

AI is the “embodiment” of the modernist naturalistic metaphysic that matter is all there is. It declares there is no activating “form” (soul), nothing that actualizes potentiality, as Aristotle held. In the materialist (physicalist) theory of mind, everything is merely physical. There is no difference at all between the human person or the human mind and a sophisticated computing machine. There is only some not-yet-fully-understood biophysical animating event which makes, for example, a mere body a living conscious organism. Consciousness, in this metaphysic, is only an epiphenomenal conjunction of matter. In this metaphysic, “self-aware” and “super” AI become the equivalent of a conscious human being. The term “consciousness” loses its meaning. The living being loses its soul.

Technologies are built on point-in-time referents to extant thought and discoveries. As such, technologies are the instantiation or the embodiment of particular ideas and scientific discoveries. Technologies are unwitting “carriers” of these ideas. The technologies we use are not inert. They are purveyors of the ideas and discoveries instantiated in them. You might say, AI theory has at its core a kind of intellectual malware: a materialist computational theory of mind.

The ultimate question for AI is the meaning of the human person and human consciousness. The current scientific and AI theories cannot account for the soul (the psūchê in Greek, the nephesh in Hebrew). AI systems have no soul (or form), the animating substance which makes a mere body a living conscious organism. Neuro-biophysical processes alone cannot account for this. Aristotle’s (and Aquinas’s) hylomorphic theory still offers the most integrative and compelling explanation.

The great confusion created by such a reductionist materialist metaphysic is to assume that if one understands how something works, one understands what the phenomenon is. As Socrates put it to his friend Cebes in the Phaedo, “There is surely a strange confusion of causes and conditions in all this.” Cebes doesn’t understand why Socrates doesn’t try to escape his imprisonment. Is it his body that is the cause of his condition? Socrates explains that it is not his body that is the cause—his state is not determined by his body. Socrates explains, “the Athenians have thought fit to condemn me, and accordingly I have thought it better and more right to remain here and undergo my sentence.” The true cause is Socrates’s will that keeps him in prison, his desire to live virtuously. To be only his body is not to be his whole self. This is the same confusion philosopher Roger Scruton identifies in his book The Soul of the World (2014, 37–38) when he speaks of the difference between what the acoustician and the musician hear in sound. Beethoven hears life as a symphony of musical space; the acoustician hears mere pitched sound.


  1. What are algorithms? An algorithm is a “recipe” of steps a computer takes to solve a particular problem (Mitchell, 28). “An algorithm is a set of defined steps designed to perform a specific objective. This can be a simple process, such as a recipe to bake a cake, or a complex series of operations used in machine learning to analyze large datasets and make predictions. In the context of machine learning, algorithms are vital as they facilitate the learning process for machines, helping them to identify patterns and make decisions based on data.” (“What is an Algorithm? Definition, Types, Implementation,” DataCamp) The process works like this: data inputs; processing of that data (this is the core function, where the processing steps, the logical and arithmetical calculations of the written algorithm, are done and repeated in a loop until the problem is solved); resulting in an output. Think of how the thermostat in your house works: the temperature is received by the sensor (the input); the thermostat processes (calculates) the input according to the algorithm; if the temperature is lower/higher than the setting, the thermostat triggers the heater or air conditioner to turn on (the output). Once complete (the problem is solved), it stops. This is the final step.
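That input-process-output loop can be sketched in a few lines of Python (a hypothetical illustration; the temperature readings, setting, and tolerance are invented values):

```python
# Hypothetical sketch of the thermostat algorithm described above:
# input (sensor reading) -> process (compare to the setting) -> output (action).

def thermostat_step(current_temp, setting, tolerance=1.0):
    """One pass of the loop: decide what the heating/cooling system should do."""
    if current_temp < setting - tolerance:
        return "heat on"   # too cold: trigger the heater
    if current_temp > setting + tolerance:
        return "cool on"   # too hot: trigger the air conditioner
    return "off"           # within range: the problem is solved, the loop stops

# Simulated sensor readings flowing through the input -> process -> output loop.
for reading in [65.0, 70.3, 74.2]:
    print(reading, "->", thermostat_step(reading, setting=70.0))
# 65.0 -> heat on   70.3 -> off   74.2 -> cool on
```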