AIn’t an issue!

Content generated by bots utilizing services from a seemingly endless list of LLMs (large language models), known to most as AI systems - the most famous among them probably OpenAI's ChatGPT - ain't an issue –

What is an issue is whether the reader/listener/viewer is affected by the content or led to believe in an alternate reality! I'm careful here because reality is in the eye of the beholder! You could argue that not all realities are equal and be right, but whether they are real is a much more questionable matter.

What separates content generated by living beings from content cooked up in slices of silicon is quickly becoming next to impossible to point to, and that predicament may give rise to issues going forward – issues of a magnitude and consequence that mankind has perhaps never faced. This is "inventing fire" grade stuff!

Consider LLMs trained to voice your parents' beliefs and teachings, able to question them, discuss them, reason with them, and perhaps even reevaluate them - decades after your parents' passing! Or "machines" teaching you as an undergraduate!? How about "news servers" compiling ample "news" to keep you watching news reels for the better part of your life!? Or systems providing you with 'guides' on which politician to vote for, based on what you "like" and "believe" in.

Most of what I just listed is in fact not even science fiction - or at least it is quite doable if not already in existence!

So the question begs: how does mankind make a distinction between content generated by its peers and content generated by what are ultimately machines?

First we'll have to consider whether it is at all necessary for humans to be able to make such distinctions. Provided these 'machines' are driven by altruistic motivations, you might argue that their outpourings will only help us survive on an increasingly worn-out planet. But altruistic to whom? All of humanity, or perhaps more narrowly a small elite? Altruistic in the "for the greater good" kind of way? Machines with very narrow ends - like generating profits - will most likely not waver at the thought of decimating populations as a kind of collateral damage, as long as they (the machines) are not actively killing people but merely consulting them on what to do and what not to do - perhaps.

One of mankind's finest qualities, albeit one becoming rarer of late, is the ability to weigh for and against a decision using gathered experience, trained and acquired knowledge, ethical and moral codes, and 'gut feelings' – like when my granddad would posit: Nå' skit kåme te ær' så wed'e et hwådden det wel wæ' (a Danish dialect, I know - roughly "when dirt comes to honour, it does not know how to behave", basically 'rags to riches' meets 'old versus new money'). Meaning: you extrapolate trajectories of events and actions from paper-thin samples of the past, but because you carefully pick samples within context, your gut feeling will serve you well more often than not.

If every bit of data is without context and presented as fact, how are we to apply our experience to it? Worse - if data bits are peddled by centuries-old honourable bodies firmly situated at the epicentre of humanity, like global newspaper brands and national television networks - then what? Do we question them - especially when what they offer seems to align with our beliefs and common sense?


Still with me? Okay, so next up is a brief setting of the stage: we have the Sender, the Message, and finally the Receiver. What we are looking for is a way for the receiver to consider the message true and in fact offered by the sender, and for the sender to have actually "put their best effort" into constructing the message! That's the task, cut to the chase.

First, let's consider the Sender. Building trust usually meant delivering on your promises again and again. Skilled shoemakers came into demand when, time after time, they delivered wooden clogs of pristine quality. By word of mouth people would agree: if you need the best quality in wooden clogs - go to SomeNamedGuy. So we need some way of identifying the sender - like Yelp or TrustPilot.

Second, consider the Message. Let's go back to the wooden clogs. Say one customer brings back their pair of clogs only the day after ordering them, and passes another customer doing exactly the same, only to find that every single man and woman in town got a pair of clogs in the span of 12 hours! You'd be right to wonder: is there something fishy about this? Can this shoemaker really build that many pairs of clogs in one night? Compiling a trustworthy message takes time, just as Avatar was not shot during a lunch break. But - I hear your question! Can big companies not produce more than perhaps a few pages of 'news' in a day? Certainly - but they consist of hundreds, perhaps thousands, of employees, each producing perhaps 1-2 pages every week. And people trust people/individuals - they do not trust companies. Why should they? You deal with Mr Anderson - not the entire Sales Department. So we need some way of verifying the build process of the message: a cross between versioning software like Git and a blockchain, making every change immutable and set in stone for all eternity.
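To make the idea concrete, here is a minimal sketch in Python of what such a commit chain could look like. Every name in it (`make_commit`, `verify_chain`) is my own invention for illustration - a cross-breed of the two ideas, not an existing tool:

```python
import hashlib
import json
import time

def make_commit(prev_hash: str, author: str, delta: str) -> dict:
    """Record one small unit of work, chained to the previous commit."""
    commit = {
        "prev_hash": prev_hash,
        "author": author,
        "delta": delta,            # the change made in this work step
        "timestamp": time.time(),
    }
    payload = json.dumps(commit, sort_keys=True).encode()
    commit["hash"] = hashlib.sha256(payload).hexdigest()
    return commit

def verify_chain(commits: list) -> bool:
    """Recompute every hash; any tampered commit breaks the chain."""
    prev = "0" * 64  # genesis
    for c in commits:
        body = {k: v for k, v in c.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if c["prev_hash"] != prev or hashlib.sha256(payload).hexdigest() != c["hash"]:
            return False
        prev = c["hash"]
    return True

chain = [make_commit("0" * 64, "reporter@example", "draft opening paragraph")]
chain.append(make_commit(chain[-1]["hash"], "reporter@example", "fix a quote"))
assert verify_chain(chain)
```

Each commit seals the one before it, so rewriting history means recomputing every hash that follows - exactly the property a blockchain can pin down for us, as we'll see next.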

Finally, the Receiver. Our example so far does not lend itself too well to this case. You see, the receiver may not even be human! Had they been, we could ask them to call the Sender and ask whether this or that piece of information was in fact of their doing, or something compiled in a split second by some machine. The Sender could lie, of course - and run the risk of being exposed and possibly never selling another piece of information. But here we are in the mid-2020s, surrounded by machines - now what do we do?

We start by tying the build process to a blockchain of sorts. It does not have to be the blockchain running the entire crypto markets; any odd blockchain will do - as long as it is trustable. Being decentralized - practically in the eye of the beholder - should do! Providing content providers with a toolset that not only ties their product's build process to a verifiable blockchain but allows them to commit work every minute/second/millisecond will be of paramount importance. Tying the build process to a blockchain that could at the same time offer a means to manage the consumption of this content holds some obvious promises by itself - but that is not really of interest in the scope of this post!
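Continuing the earlier sketch: anchoring only the newest commit hash to a public chain is enough, since it transitively seals everything beneath it. `publish_to_chain` below is a hypothetical stand-in for whatever chain one trusts:

```python
def publish_to_chain(digest: str) -> str:
    """Hypothetical stand-in: submit a transaction carrying the digest
    to whatever blockchain is trusted, returning a transaction id."""
    return "tx-" + digest[:12]

def anchor_head(commits: list) -> str:
    """Anchor the newest commit hash. Once the digest sits in a block,
    every commit hashed beneath it provably existed by that block's time,
    so the build history cannot be backdated afterwards."""
    return publish_to_chain(commits[-1]["hash"])

receipt = anchor_head(chain)  # run as often as every minute/second/millisecond
```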

Now we are able to offer a way for the receiver to verify the build process (and possibly the sources used to compile the content). Next we need the receiver to be able to verify the authenticity of the sender/creator. Is it indeed that well-regarded journalist freelancing for Le Monde, or does it turn out in the end that the content was compiled in mere microseconds by nothing but transistors?

We add a distillation of DNS with scents of SSL and certification authorities, effectively providing a body (with globally distributed assessor bodies constantly securing the identities of creators) to persist and disseminate the authenticity of senders, perusable by machines operated by receivers (which again could be other machines or real humans). Senders would have to travel to public offices or branch offices of this body to have their authenticity stored after offering samples of biometrics - fingerprints, sweat, iris scans, and more. (See the footnote for another angle on this body.)
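A minimal sketch of that body, assuming the third-party Python `cryptography` package: the registry signs a sender's public key after the in-person enrolment described above, and any receiver holding the registry's public key can check the endorsement - a bare-bones certificate authority:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

registry_key = ed25519.Ed25519PrivateKey.generate()  # held by the body
sender_key = ed25519.Ed25519PrivateKey.generate()    # held by the creator

def enrol(name: bytes, sender_pub: bytes) -> bytes:
    """After biometric enrolment, the body vouches for (name, public key)."""
    return registry_key.sign(name + sender_pub)

def verify_sender(cert: bytes, name: bytes, sender_pub: bytes) -> bool:
    """Any receiver can check the body's endorsement of the sender."""
    try:
        registry_key.public_key().verify(cert, name + sender_pub)
        return True
    except InvalidSignature:
        return False

pub = sender_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
cert = enrol(b"freelancer@lemonde", pub)
assert verify_sender(cert, b"freelancer@lemonde", pub)
```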

When the creator operates their workbench/editor/toolset - talking, typing, acting, whatever - they (could) allow the toolset to collect biometric data every so often and hash these data into the same blockchain! In this way the product becomes an extension of the sender. Biometrics may of course be falsified - and most certainly will be - but such cases will (like the lying Sender before) risk exposure and condemnation. Allowing receivers close to senders to collect individual biometric samples will strengthen the entire workflow and system. With only a small sample of "true" biometrics collected by 1-2 fans close to their most-beloved actor, they could chip in to the entire system and help build the trust necessary for a sender's authenticity to be verified.
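A variation on the earlier commit sketch shows what "hashing biometrics into the blockchain" might mean in practice. The field names are again my own, and only a salted digest of the raw sample ever leaves the creator's device:

```python
import hashlib
import json
import time

def commit_with_biometrics(prev_hash: str, author: str, delta: str,
                           sample: bytes, salt: bytes) -> dict:
    """Same chained commit as before, now carrying proof-of-presence."""
    commit = {
        "prev_hash": prev_hash,
        "author": author,
        "delta": delta,
        "timestamp": time.time(),
        # only a salted digest of the raw iris scan / fingerprint is stored;
        # the sample itself never leaves the creator's toolset
        "biometric_hash": hashlib.sha256(salt + sample).hexdigest(),
    }
    payload = json.dumps(commit, sort_keys=True).encode()
    commit["hash"] = hashlib.sha256(payload).hexdigest()
    return commit
```

Because the biometric digest sits inside the commit before the commit's own hash is computed, it is sealed into the chain along with the work itself.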

I'll close this post by daring an example of this "tech":

Melissa & Mustafa skipped their last class that day - word in the streams was that Hallifax had released a new ario that was crazy, and they loved everything Hallifax made; so much that a 1-hour lecture on DNA markers was no contest!

Ready, Mel? Mustafa looked at her through the AR lens strapped to his head. She nodded. He pressed PLAY and they both lost their footing even though they were safely tucked into pillows and blankets on the floor - they knew what was in store for them! Wait! Melissa yelled, and Mustafa pushed PAUSE.

That's unreal! He couldn't possibly! Melissa nearly shouted, still somewhat deafened from the first few seconds of the ario. She made a gesture and commanded: Show tracking from zero through 10, and the ario started again, now stepping forward almost frame by frame, meanwhile running verifications in a second window to the right of them; below the scene, a line of check marks indicated that every frame was in fact not only genuine but untouched by post-production editing - except for one frame. Melissa was not entirely convinced and continued: Show actors' bios, she demanded. The scene froze and all 5 actors in the scene had small annotations attached to their heads. One was a bot - the wolf - but the rest checked out. Cast credits below, and another window above and to the left started listing time spent on set and commits ordered by the cameramen.

I'll be da– Melissa stuttered in disbelief. It was that good!

FOOTNOTE:

I mentioned a 'body' above – one to manage authenticities. This body could usher in a new dawn for the much-hated "email" of yesterday. We could finally ditch the 'postbox' analogy - the analogy that has done so much harm for decades now!

Instead of a completely anonymous slot through which you, as the sender, can push whatever nefarious digital compilation you like - leaving the poor receiver's machines to linger anxiously over it, spending millions if not billions of CPU cycles and endless watts of precious electricity trying to undo and rectify the damage - you would now have a bulletin board where senders pin their desire to forward messages (of any construct) to you. In a few CPU cycles your assigned machine can verify the true identity of any sender and, if you deem it necessary, go over the commits to the message, easily discarding any suspicious-looking fragments.
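A last sketch, equally illustrative, of how that bulletin board could look to the receiver's machine - a pin is just a signed intent-to-send, triaged with the registry check from the earlier sketch:

```python
board = []  # the receiver's public bulletin board

def pin_intent(sender: str, cert: bytes, message_head: str) -> None:
    """The sender pins: 'I wish to forward you the message whose commit
    chain ends at message_head' - no message body is pushed anywhere."""
    board.append({"sender": sender, "cert": cert, "head": message_head})

def triage(pins: list, is_verified) -> list:
    """A few CPU cycles per pin: is_verified wraps the registry check
    from earlier; only verified senders' messages are ever fetched."""
    return [p for p in pins if is_verified(p["sender"], p["cert"])]

pin_intent("freelancer@lemonde", b"...cert bytes...", "a3f9...")
safe = triage(board, lambda name, cert: True)  # plug in verify_sender here
```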