Any sufficiently advanced technology...

[Image: An AI rendering of Sir Francis Bacon admiring the technology he helped inspire]

There's an old saying, courtesy of Arthur C. Clarke: "any sufficiently advanced technology is indistinguishable from magic." It seems to hold true even now. There's another, slightly more modern saying: "if it works, it's not stupid," which seems equally accurate.

From Symbols to Silicon: The Philosophical Origins of Computing and Artificial Intelligence

It's commonly accepted that the modern binary number system was created by Gottfried Leibniz in 1689. However, Sir Francis Bacon was toying with a precursor to Leibniz's binary in the early 1600s - with an additional twist. Bacon realized that information could be encoded by anything, "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature" - in imagery, in fonts, or anywhere one could present two contrasting variations of some kind. Or, more succinctly: "how to make anything signify anything." (You'll find working code for his cipher at the end of this article.)

This led to Bacon's system of steganography (a term and practice documented in the late 1400s by Johannes Trithemius): hiding information in plain sight. Subtle, encoded information could be placed in entirely banal sources, like shopping lists or poems, so that it was easily and automatically overlooked. While cryptography presents an enciphered and (ideally) unbreakable code, steganography removes the assumption that there's any code to break by remaining essentially invisible. Bacon's secrecy and "security through obscurity" relied heavily on symbolic representation to encode information, and he repeatedly articulated the idea that anything could be used to represent anything else - symbolically. Symbolic representation of this sort also underpins many Kabbalistic practices.

Fast forward to the Victorian age, and we encounter Ada Byron Lovelace, the Enchantress of Numbers. While collaborating with Charles Babbage on his Analytical Engine, Lovelace posited a transformative idea: a machine's symbols weren't just useful for numerical calculation. They could encode any form of data, from letters and numbers to colors and sounds. This insight bridged the chasm between mere calculation and genuine computation. Lovelace's insight was essentially the inverse of Bacon's - machine-readable symbols representing any form of information, rather than Bacon's encoding of information in any two-valued representation - but both shared the same underpinnings: symbols, encoding, and translation.
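To make that concrete, here's a tiny sketch in Ada - fittingly, the language named in Lovelace's honor, and the one used for all the examples in this article. One and the same bit pattern is read first as a number, then as a letter; only the interpretation changes, never the bits. (The Any_Data name is purely illustrative.)

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Any_Data is
   --  One fixed 8-bit pattern. The bits below never change;
   --  only our interpretation of them does. The same pattern could
   --  just as well be a pixel's red channel or a sample of sound.
   Bits : constant Natural := 2#0100_0111#;
begin
   Put_Line ("as a number:" & Natural'Image (Bits));    --   71
   Put_Line ("as a letter: " & Character'Val (Bits));   --   G
end Any_Data;
```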

As I mentioned, a similar idea is found in systems of Kabbalah, one facet of which is gematria: the use of Hebrew, Greek, or other alphabets whose letters double as numbers to encode words into numeric values. This is fundamentally similar to what we do now to create embeddings or features in machine learning and AI, particularly in large language models (à la word2vec and ChatGPT), which place tokens, words, or phrases in a vector space to determine the relatedness of terms. This sounds a lot like the idea espoused in gematria: that words which sum to the same value have the same or related meanings. The application has changed, but the underlying symbolic representations and translations, particularly between number and meaning, have endured, largely unaltered, for centuries.
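As an illustration, here's a toy gematria - simplified to the Latin alphabet, with A = 1 through Z = 26 rather than the traditional Hebrew letter values - that reduces any word to a number:

```ada
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Characters.Handling; use Ada.Characters.Handling;

procedure Gematria is
   --  Sum a word's letters, case-insensitively, with A = 1 .. Z = 26.
   function Value (Word : String) return Natural is
      Sum : Natural := 0;
   begin
      for C of Word loop
         if Is_Letter (C) then
            Sum := Sum + Character'Pos (To_Upper (C)) - Character'Pos ('A') + 1;
         end if;
      end loop;
      return Sum;
   end Value;
begin
   Put_Line ("symbol:" & Natural'Image (Value ("symbol")));
   Put_Line ("cipher:" & Natural'Image (Value ("cipher")));
end Gematria;
```

Replace the fixed letter values with learned per-token vectors and you have, in caricature, an embedding.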

It is generally thought that the human subconscious uses a sort of language, different from the explicit words of conscious thought, based largely on symbols. Given the neurological underpinnings of the mind, we might conclude that this process is very similar - a form of mathematical and geometric (spatially encoded) representation creating symbolic constructs that are then translated to and from conscious, spoken language. This seems likely, in fact, given that we continue to re-create similar systems of knowledge and representation throughout human history. Modern AI is more intentionally and explicitly attempting to re-create and externalize the human mind and its hidden processes, but I would posit that humans have been unconsciously doing this, in some form or another, for as long as symbols and writing have existed.

All of this underlies what we call abstract thinking - the ability to conceptualize something outside the here-and-now, something other than any particular thing we can point at. I can think of the concept of "soda" separately from a drink in front of me, or from any particular type of soda, as an abstraction of soda-ness. This allows me to formulate the idea of wanting a soda and imagine future scenarios for acquiring one - and the more abstract the goal, the more likely I can find a match, such as allowing both Ginger Ale and Root Beer to fall into the same category in case one isn't available. We see this principle in code as classes and types, the here-and-now being the instances conjured forth into being as needed - and, for a little extra Zen, duck typing as the lack of attachment to any particular class or type, so long as it works for the task at hand.
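A minimal sketch of this idea, using an Ada interface to stand in for the abstraction (the Soda_Run, Ginger_Ale, and Root_Beer names are purely illustrative): code written against the abstract Soda works for whichever concrete instance shows up.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Soda_Run is
   --  The abstraction: "a soda" as a concept, apart from any instance.
   type Soda is interface;
   function Name (S : Soda) return String is abstract;

   --  Two concrete, here-and-now kinds that satisfy the concept.
   type Ginger_Ale is new Soda with null record;
   overriding function Name (S : Ginger_Ale) return String is ("ginger ale");

   type Root_Beer is new Soda with null record;
   overriding function Name (S : Root_Beer) return String is ("root beer");

   --  Written against the abstraction: any Soda will do.
   procedure Quench (S : Soda'Class) is
   begin
      Put_Line ("Drinking " & Name (S));
   end Quench;

   GA : Ginger_Ale;
   RB : Root_Beer;
begin
   Quench (GA);  --  either one satisfies the abstract goal
   Quench (RB);
end Soda_Run;
```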

This same form of abstraction gives us the ability to transfer knowledge, or perform lateral thinking, so that we might take lessons learned in one specific knowledge domain and apply them elsewhere, to similar (but not identical) scenarios or processes. This is called generalization, and it's implied in the term Artificial General Intelligence - which is why LLM-based AI seems far more lucid and capable than prior iterations of AI.

By embedding language such that "soda" is "near" other things related to soda-ness (drinks, fluids, thirst, etc.), an LLM can produce responses based on lateral traversals and correlations - the crux of intelligent problem solving - far more efficiently and effectively than by navigating hierarchies of hypernyms and hyponyms such as WordNet provides. It may not be actual AGI, but vector-based embeddings are a huge step forward in enabling AI to process information more abstractly, through proximal relationships along many axes, versus attempting to parse formal languages more literally and exactly (i.e., "expert systems"). Many humans would do well to further develop this cognitive skill of abstract and lateral thinking as well, but I digress.
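Here's a toy version of that "nearness", with the usual caveats: the three hand-picked axes (drinkable, sweet, carbonated) and their values are invented for illustration, while real embeddings are learned from data and span hundreds or thousands of dimensions. Cosine similarity is one standard way to score proximity in such a space.

```ada
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

procedure Nearness is
   type Vector is array (Positive range <>) of Float;

   --  Cosine similarity: near 1.0 means "pointing the same way"
   --  in the space; negative means pointing in opposite directions.
   function Cosine (A, B : Vector) return Float is
      Dot : Float := 0.0;
      NA  : Float := 0.0;
      NB  : Float := 0.0;
   begin
      for I in A'Range loop
         Dot := Dot + A (I) * B (I);
         NA  := NA + A (I) * A (I);
         NB  := NB + B (I) * B (I);
      end loop;
      return Dot / (Sqrt (NA) * Sqrt (NB));
   end Cosine;

   --  Toy axes: (drinkable, sweet, carbonated).
   Soda      : constant Vector := (0.9, 0.8, 0.9);
   Root_Beer : constant Vector := (0.9, 0.9, 0.8);
   Gravel    : constant Vector := (-0.9, -0.5, -0.8);  --  opposed on every axis
begin
   Put_Line ("soda vs root beer:" & Float'Image (Cosine (Soda, Root_Beer)));
   Put_Line ("soda vs gravel:   " & Float'Image (Cosine (Soda, Gravel)));
end Nearness;
```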

The Poetry of Code: Symbols, Logic, and the Artistry of Creation

In the vast mosaic of human expression, few arts are as simultaneously abstract and precise as coding. At its core, writing code is a symphony of symbols, a dance of logic and creativity, firmly grounded in the immutable laws of mathematics. But beyond the binary and algorithms, it's an art form of its own, and quite poetic in its own right.

Symbols, as we've seen, are the crux of language and cognition itself. They bridge the ethereal with the tangible, the abstract with the concrete. In coding, symbols come alive, not merely as representations but as active agents of transformation - the key difference between mathematics and programming being the active quality of the latter: execution that creates tangible results. Each line of code, each function and variable, is a symbol encoded with meaning. But more than that, they're a promise, an intent crystallized into action.

This act of creation is grounded in logic, the bedrock of all code. Logic, in its purest form, is distilled mathematics. It's the language of reason, of cause and effect, of if and then. Every piece of software, every application, every digital tool we interact with is a manifestation of this logic. But unlike pure math, which revels in its abstract beauty, coding takes this logic and gives it purpose, direction, and, most importantly, functionality.

Yet, to view coding merely as a logical endeavor would be to miss half the picture. For intertwined with this logic is artistry. Like a poet choosing just the right word to evoke an emotion, a coder selects the perfect algorithm, crafts the ideal data structure, or designs the most intuitive user interface. This artistry isn't capricious; it's grounded in the very essence of symbols. It's an understanding that symbols, whether in code or language, carry weight, meaning, and power. They can inspire, facilitate, and sometimes, in their most profound moments, transform.

The art of coding, then, is a confluence of symbol, logic, and creativity. It's where mathematics meets imagination, where the rigor of logic dances with the fluidity of art. It's a testament to humanity's insatiable curiosity, the drive to understand the language of the mind and universe and, in doing so, create our own. To isolate technology and engineering from their philosophical and psychological (and even esoteric) roots is to neuter them and lose the underlying essence that gave rise to them. To lose this underlying context is to limit our comprehension and our ability to generalize and apply this understanding more broadly. There's a common trope that philosophy is worthless since it won't get you a job, but what's been forgotten is that its proper study will benefit you in any endeavor - and discrediting the usefulness of the liberal arts in general, as training data for our own human neural networks, serves neither humanity nor us as individuals.

Studying nearly any subject in enough depth will lead to a more comprehensive and true understanding, and this depth of understanding - along with the capacity for abstract thinking and symbolic manipulation - facilitates generalization. Thus it's not a waste of time to study something with no apparent direct application or benefit (like philosophy): all learning based in natural principles acts like training data for neural networks and improves our own wetware. Expert systems can memorize and regurgitate information, and can automate and perform explicitly defined tasks, but they are severely limited and unable to respond to novel situations. As a sentient, intelligent being, strive to be an [A]GI instead.

Just for fun, here's some code that encodes and decodes the Bacon cipher, written in Ada.
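(A small caveat: this sketch uses the modern 26-letter variant of the cipher, in which every letter gets its own five-symbol group; Bacon's original alphabet folded I/J and U/V together. In practice the A/B symbols would then be hidden in a cover text, e.g. as two subtly different typefaces.)

```ada
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Characters.Handling; use Ada.Characters.Handling;

procedure Bacon_Cipher is

   subtype Group is String (1 .. 5);

   --  Each letter becomes five two-valued symbols: its alphabetic
   --  index (A = 0 .. Z = 25) written in base 2, with 'A' for 0 and
   --  'B' for 1. Any "twofold difference" could carry these symbols.
   function Encode (C : Character) return Group is
      N      : Natural := Character'Pos (To_Upper (C)) - Character'Pos ('A');
      Result : Group;
   begin
      for I in reverse Result'Range loop
         Result (I) := (if N mod 2 = 1 then 'B' else 'A');
         N := N / 2;
      end loop;
      return Result;
   end Encode;

   --  Reverse the process: read five symbols back into a letter.
   function Decode (G : Group) return Character is
      N : Natural := 0;
   begin
      for Symbol of G loop
         N := 2 * N + (if Symbol = 'B' then 1 else 0);
      end loop;
      return Character'Val (Character'Pos ('A') + N);
   end Decode;

   Message : constant String := "BACON";

begin
   --  Encode the message as groups of A/B symbols...
   for C of Message loop
      Put (Encode (C) & ' ');
   end loop;
   New_Line;

   --  ...and round-trip it back to the original letters.
   for C of Message loop
      Put (Decode (Encode (C)));
   end loop;
   New_Line;
end Bacon_Cipher;
```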

How do these things relate, and how can we put these insights into application? I hope to usefully articulate exactly that in a series of upcoming articles, so read on to find out!