AI Threat: Please Stop the ASI and AGI Race.

“All that glitters is not gold.”

In a very short span of time, almost everyone has started using AI applications, knowingly or unknowingly. It arrived all of a sudden, like a tech revolution, and it is already impacting almost every job sector and almost every aspect of life.

Before I start, I would like to make my point clear. I am not against narrow AI, but I am against Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). The question is, why?

The builders of these systems are promising a utopia that is never going to exist. Before we build AGI/ASI, we need to ask ourselves whether we are mature enough to handle it.

I believe we are not! The power of this technology is far greater than that of the atomic bomb we created.

Let’s have a look at the different aspects that made me believe that we are not ready for this:

Environmental Damage

I would like to quote Sam Altman, founder of OpenAI, who said in 2015:

“AI will probably most likely lead to the end of the world, but in the meantime, there will be great companies created with serious machine learning powers.”

In 2025, he said: “The world needs a lot more processing power … that like tiling datacenters on earth, which I think is what it looks like in the short term and long term.”

So he knew what would happen, but the need of the hour is more power. (More power to make that happen!? I am not sure.)

Where is this power coming from? We don’t have this much power right now, which means we have to build more power plants and solar plants around the world. Let’s look at some recent data so that you understand the scale of things.

Feb 2025: In a single day, ChatGPT used enough power to charge 11,066 electric cars or 8,000,000 phones from 0–100% (39.98 GWh). At that rate, in one year ChatGPT alone will use more power than 117 countries (14.592 TWh).

According to calculations and predictions from Dec 2024, AI power usage will grow 927% to 2,055% by 2028 (150 to 1,300 TWh).

Based on the data, it is estimated that by 2027 the annual water usage of AI will equal half of the UK’s annual water usage.

Now, you might think, how are these AI companies going to get this much power? The fact is, all of these AI companies are investing in power plants and power projects. 

Hardly a single nuclear power plant has been commissioned in the USA in the last decade, but this year alone, agreements have been signed for five new nuclear power plants, all planned to be commissioned by 2030 and fully backed or owned by these tech giants. They are not stopping there; they are busy making the earth look like a mirror by installing solar panels all around the world, and the so-called philanthropists are chasing the new power sources that could solve the power crisis.

This means we are either unaware of what is happening, kept in the dark by the authorities and the media, or we simply don’t bother until it shows up on our doorstep.

Data Centers that are coming up like mushrooms after rain

Data centers used to be on the outskirts, always away from cities, but that is changing. They are coming up everywhere, close to almost every neighborhood.

All they want is more computational power, and so these data centers are popping up everywhere.

Data center map of USA

Have a look at https://www.datacentermap.com/usa/

If you zoom into the US, you can see them coming up close to water sources. It doesn’t matter whether they sit next to where people live, because all of these data centers need large amounts of water to cool them down. Near a township, in the middle of a city, or in a village, it makes no difference; they need computational power and water to cool it.

Social Impact:

A less talked-about problem. 

In the utopia painted by these tech giants, the world of GenAI and ASI will free people from economic necessity to pursue creative, spiritual, and intellectual endeavors, while governments and corporations will have abundant resources to support the general public. Everyone will have government support, and most of us won’t have to go to work.

But

The most profound question is: if you don’t have anything to do, what will you do with the “8 hours” of your day?

“An idle mind is the devil’s workshop.” Psychologists and social scientists warn that prolonged idleness and disconnection could lead to a rise in antisocial and sociopathic behavior. Yet, it seems this possibility is either being overlooked or conveniently ignored by those shaping the narrative of the AI-driven future.

We have always had tools; with the invention of each tool, tasks that once required ten workers could be done by just four.

But this is the first time we are inventing an inventor. Something that can invent new things without our help.

Don’t get me wrong. I am not against AI. I am all in for narrow AI, but I am against general-purpose AI.

I believe we, and the world, are not ready for this. Until we know how to build these systems safely and keep them under democratic control, we should pause their development.

As Roman Yampolskiy says, “Now the race for ASI is like the race during the Second World War, when every country was after the atom bomb. They knew whoever created it was going to control the world.”

Nearly a century after the Second World War, we stand on the brink of the next monumental discovery, the one that will define who controls the future. But this is far greater than the atomic bomb. 

“Because an atomic bomb cannot create another bomb by itself, but this new creation can replicate, evolve, and expand beyond human comprehension. Similarly, your cat or dog could never grasp the countless ways humans could harm them.”

We may never fully grasp what this technology is truly capable of. But we can still decide what the future looks like by acting now.

What we need is a consortium, similar to how nuclear technology is controlled, to make sure this does not get into the wrong hands. Many forums and associations are fighting for this.

If what I said above resonates with you, you can sign this petition and show your support:

“If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further.” (Geoffrey Hinton, Nobel Prize winner and Godfather of AI)

Yet Another Reasoning Model: Kimi K2

A Chinese startup has launched yet another open-source AI model. Alibaba-backed Moonshot AI just dropped Kimi K2 Thinking, and “it beats GPT-5 and Claude Sonnet 4.5, period!!!”

This is bigger than DeepSeek-R1, yet the media have chosen to remain silent about it.

As they claim:

  • Kimi-K2-Base: The foundation model, a strong start for researchers and builders who want full control for fine-tuning and custom solutions.
  • Kimi-K2-Instruct: The post-trained model is best for drop-in, general-purpose chat and agentic experiences. It is a reflex-grade model without long thinking.

And the results speak for themselves, backing up their claims:

Performance of Kimi K2 Vs Others

Even more impressive: “This Chinese startup pulled it off despite facing both regulatory restrictions and a lack of access to cutting-edge chipsets.”

I’m certain this is just the beginning and there’s much more to come. Now the world is racing to crack Artificial General Intelligence ( AGI) and Artificial Super Intelligence (ASI). It mirrors the race for atomic energy in the 1940s — whoever succeeds first will hold immense power, capable of shaping (or potentially weaponizing) this technology against others.

It reinforces the idea that restrictions aren’t the path forward. Just as humanity once came together to control the power of atomic energy, we now need to collaborate, share technology responsibly, and ensure it’s used wisely, thus preventing it from falling into the wrong hands.

Links:

Kimi K2: Open Agentic Intelligence (moonshotai.github.io)
“Kimi K2 is our latest Mixture-of-Experts model with 32 billion activated parameters and 1 trillion total parameters.”

Kimi AI – Kimi K2 Thinking is here (www.kimi.com)
“Try Kimi, your all-in-one AI assistant, now with K2 Thinking, the best open-source reasoning model.”

Github: https://github.com/MoonshotAI/Kimi-K2

API : https://platform.moonshot.ai/docs/overview

Vector RAG & Graph RAG: A Quick Read on Where to Use What.

When we try to replicate real-world thinking or mimic the human thought process in any form or shape, it’s important to recognize that the world itself is inherently relational. Humans understand, react, and make decisions by connecting all of these: people, emotions, experiences, and contexts. So why are we forcing all this richness into vectors, effectively stripping away the relational semantics that give problems their real meaning?

When we use a vector DB for storage, we lose the relationships. In a vector DB, each piece of data is converted into a vector embedding, a long list of numbers, and those numbers are saved in the database. When you search for something, it uses a similarity-search algorithm such as cosine similarity or Euclidean distance to find the most similar vectors. This is ideal for simple question-answer models, recommendation systems, and similar use cases, where we run single-hop queries, do similarity searches, or retrieve stats.
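To make that concrete, here is a toy sketch (not any particular vector database, and embed() below is a stand-in for a real embedding model) of what retrieval from a vector store boils down to: documents become lists of numbers, and search is just a nearest-neighbour comparison over those numbers.

```python
# Minimal sketch of how a vector store retrieves "similar" items.
# embed() is a stand-in for any embedding model; here it is a toy
# bag-of-words vector purely for illustration.
import numpy as np

def embed(text: str) -> np.ndarray:
    vocab = ["shoe", "running", "waterproof", "dog", "toy"]
    return np.array([text.lower().count(w) for w in vocab], dtype=float)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

docs = ["waterproof running shoe", "chew toy for dogs", "trail running shoe"]
index = [(d, embed(d)) for d in docs]          # the "vector database"

query = embed("running shoes")
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])   # nearest neighbour by cosine similarity; no relationships involved
```

Notice that nothing in this lookup knows that a shoe is made of a material, sold by a brand, or related to another product; it only knows which strings of numbers sit close together.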

But what do you do when you cannot compromise on accuracy and speed, when they are non-negotiable?

This is where the knowledge graph, or Graph RAG, comes in handy. It can do multi-hop traversals, and each hop can carry a weight. In a knowledge graph, the data is stored as facts, entities, and relationships, where each entry represents explicit knowledge and the relationships between entities are explicitly defined.
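For contrast, a minimal sketch of the same idea with an explicit graph, using networkx and made-up entities and relations: a multi-hop question becomes a walk over named edges, and each hop can carry a weight.

```python
# Minimal multi-hop example over an explicit knowledge graph (illustrative data only).
import networkx as nx

g = nx.DiGraph()
g.add_edge("AdizeroEvo", "GoreTexLiner", relation="uses_material", weight=1.0)
g.add_edge("GoreTexLiner", "SupplierX", relation="made_by", weight=0.8)
g.add_edge("AdizeroEvo", "TrailRunning", relation="designed_for", weight=0.9)

# Two-hop question: which supplier is behind the material this shoe uses?
for _, material, d1 in g.out_edges("AdizeroEvo", data=True):
    if d1["relation"] == "uses_material":
        for _, supplier, d2 in g.out_edges(material, data=True):
            if d2["relation"] == "made_by":
                print(f"AdizeroEvo -> {material} -> {supplier} "
                      f"(path confidence {d1['weight'] * d2['weight']:.2f})")
```

The answer is reached by following explicit relationships, not by hoping two embeddings happen to land near each other.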

This is why it is useful in tasks where reasoning and inference, precision and accuracy, and ontology and relationships matter the most.

The simple difference between Vector RAG and Graph RAG is as below:

Difference between Vector RAG & Graph RAG (diagram)


This doesn’t mean you can’t interchange these. Instead, it’s better to match your data structure to your reasoning requirements, not your technology preferences. 

In practice, it’s better to use a hybrid architecture combining vector, graph, and relational databases to leverage the strengths of each. This approach allows you to retrieve both meaningful (semantic) and precise (factual) information, especially when integrated with an LLM.
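As a rough illustration of that hybrid idea, assuming you already have both stores in place (the two lookup functions below are toy in-memory stand-ins, not real database clients): let the vector side recall candidates and the graph side supply the explicit facts, then hand both to the LLM as context.

```python
# Hybrid retrieval sketch with toy in-memory stand-ins for a vector DB and a graph DB.
def vector_search(query: str) -> list[str]:
    # Placeholder for a real similarity search; returns candidate entity ids.
    return ["AdizeroEvo"]

def graph_facts(entity: str) -> list[tuple[str, str, str]]:
    # Placeholder for a real graph lookup; returns explicit (subject, relation, object) facts.
    triples = {"AdizeroEvo": [("AdizeroEvo", "uses_material", "GoreTexLiner"),
                              ("AdizeroEvo", "designed_for", "TrailRunning")]}
    return triples.get(entity, [])

def hybrid_context(query: str) -> str:
    candidates = vector_search(query)                        # semantic recall
    facts = [t for e in candidates for t in graph_facts(e)]  # precise facts
    return "\n".join([*candidates, *(f"{s} -{r}-> {o}" for s, r, o in facts)])

print(hybrid_context("waterproof trail shoe"))   # context you would pass to the LLM
```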

Atlas: I believe the SERP Game is Changing


Atlas is OpenAI’s browser with ChatGPT built in, or rather a browser where ChatGPT is the default interface. It can complete tasks without always redirecting you to a URL.

Most of you might have thought that all it means is just fewer clicks, but the real story is much bigger than that.

Many think it is just another browser that finds results. It’s not what it does; it’s how it does what it does that is going to change the SERP game.

ChatGPT itself is a search engine; it runs queries, scrapes the internet based on user queries, summarizes the results, and presents them to you.

With Atlas, it’s different, and not just Atlas; Google’s AI search is similar. These browsers are like agents: they can interact with the JavaScript (JS) that renders your front end.

This means that the SERP game is going to change. If your site has bad JavaScript or is slow, then you won’t appear.

I used a similar example, which is discussed in the last post: OpenAI Apps SDK: The App store of ChatGPT?

When I searched for

“Find me the new version of adidas black adizero evo shoes size 10.5 that can be delivered by EOD tomorrow”

on both Atlas and normal Google Search (not Google AI search) and looked at the network tab, I found something fundamentally different, and that sparked the thought about the change that is going to happen in SERP.

Atlas Search VS Google Search

This is where “how it does what it does” makes the difference: it immediately renders all the JS, exactly the way a human sees it.

I am not saying Google Search does not render JS for SERP results. Instead, the difference is

“it is done behind the scenes” vs. “this is live, the same rendered content a human sees.”

Let’s look at some of the general differences:

How Atlas Searches 

This is where I believe the SERP game is going to change.

When the user is searching with a specific intent and context, the normal SERP of search engines is going to fail. As tools like Atlas and similar browsers begin to interpret user intent more deeply, the real question becomes: how do you serve that data effectively?

Some quick wins could be:

It’s not just about crawlers and indexes

It used to be a world where website data was optimized for bots and crawlers, just to appear on the SERP. That game is changing. Browser agents like Atlas will now render your site, execute your JavaScript, and extract exactly the information the user is looking for.

As we discussed above, browsers like Atlas, Google AI search, or any other AI tool are going to render the JS during the normal page-load lifecycle and can start analyzing DOM nodes and network responses during the initial render. This means they see the same rendered content a human user sees (including content injected by JS).
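If you want to see the gap on your own site, here is a quick sketch: fetch the raw HTML the way a classic crawler would (requests) and compare it with the DOM after JavaScript has run (Playwright). The URL is a placeholder, and this is only a rough proxy for what any given agent actually does.

```python
# Compare what a plain crawler fetches vs. what a JS-rendering agent sees.
import requests
from playwright.sync_api import sync_playwright

url = "https://example.com"          # swap in your own site

raw_html = requests.get(url, timeout=10).text      # crawler view: no JS executed

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(url, wait_until="networkidle")        # let client-side JS finish
    rendered_dom = page.content()                   # agent view: JS-injected content included
    browser.close()

print(len(raw_html), len(rendered_dom))  # a large gap hints at JS-dependent content
```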

So make sure you clean up your crappy JS ASAP. Make your site agent-readable.

Your Website = A Repository of Information for AI Systems

We are one step closer to the theory I wrote about in SEO & AEO: Any Different?

Make your websites ready for agents to read and interpret all relevant data. You have to structure your data properly so that the browser agents can resolve the user queries much faster. This has to go hand in hand with the performance of your frontend.
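One concrete, low-effort step in that direction is publishing structured data alongside the page so an agent can read facts instead of inferring them. A minimal sketch, assuming standard schema.org Product markup and made-up product values:

```python
# Emit schema.org Product JSON-LD so agents can read facts instead of inferring them.
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Adizero Evo (black)",          # illustrative product data
    "size": "10.5",
    "offers": {
        "@type": "Offer",
        "price": "119.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed this inside a <script type="application/ld+json"> tag in the page head.
print(json.dumps(product_jsonld, indent=2))
```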

I feel that in the new world, you should be making sure your site is ready for agents to read. This doesn’t mean the traditional SEO work is not needed; this comes on top of what you were doing, or what you were ignoring.

This further strengthens my belief that:

In the near future, your websites are likely to evolve to become a repository of information for AI systems and search models rather than relying solely on direct user visits.

I’d love to hear your thoughts and ideas on this — feel free to share them in the comments or drop me a message.

Related articles:

SEO & AEO: Any Different?

OpenAI Apps SDK: The App store of ChatGPT?

What is Culture? & A Silent Casualty in the Agentic Era.

Culture, a word we are so proud of. 

Every society and every company is proud of its culture and believes it is the epitome; even some companies with truly questionable cultures claim to be the epitome of culture.

Have you ever wondered what this culture is and how it forms and shapes?

The most famous definition of culture is by the English anthropologist Edward Burnett Tylor.

“Culture is a complex whole encompassing knowledge, beliefs, art, morals, laws, customs, and other capabilities and habits acquired by individuals as members of society.”

If you examine cultures, this is something unique to humans; no other animal possesses it. All other animals have behavioural traits, not cultural traits.

It is a non-debatable point that only humans possess culture. But with the GenAI and humanoid-robotics era, non-humans are going to enter our workspace, or already have.

How are businesses and companies that are so proud of their culture going to handle this? How are you going to teach your company culture to GenAI?

Let’s look at the layers of culture:

Layers of Culture

Symbols

The outer layer of culture is Symbols: These are words, gestures, objects, pictures, etc. that carry a particular meaning.

Your company logo is one of them; the way you treat people, how you communicate, and so on define the symbols in your company.

Heroes

These are real or fictional persons, past or present, who stood for the group or company. They serve as models of behavior.

This could be anyone in your company who stood up and fought for its values. You need a heroic story of a group of people, a person, or a project that you can quote to remind people: this is how we used to do this, and this is how we will continue evolving. It will be passed on to the generations to come, and that helps cement the culture.

Rituals

These are collective activities and social necessities, even if they are sometimes not strictly necessary for reaching the objective: ways of greeting, showing respect, ceremonies, and so on.

In your company, this could be your stand-ups, town halls, how you greet each other, how you conduct meetings, when, and how.

Values

This is the core of every culture. Values are the states of affairs that define good and bad, right and wrong, and so on. Values can only be inferred from the way people act under different circumstances.

If you look at the diagram, culture is like the layers of an onion. When you start peeling away the layers in search of the “real” onion, you won’t find it — because those layers are the onion.

In the same way, without all of these layers, there is no culture.

The visible aspects of culture are its symbols, heroes, and rituals, and these are the elements we can see, hear, or observe in everyday practice. The real meaning behind these things is connected to values, beliefs, and practices. This drives emotions, and this is invisible and intangible.

In the world of Agents, Humanoid robots, and GenAI, it will be difficult to create these emotions. So how are you going to create a culture, or how are you going to keep your culture intact? 

As we discussed earlier, culture is something possessed and followed only by humans. In the new world, company leaders are going to play a more important role than before in determining and making sure the company stays true to its culture.

Some of the changes that are going to happen are:

1. Companies don’t need a big team

This is a sad truth for some, but the reality is we don’t need big teams to deliver a project anymore. Fewer people to manage, fewer meetings to survive… project managers might actually be smiling at this one 🙂 .

This means that your heroes or heroic stories of projects have to come from this small group. The leaders in these projects have to stay true to the company culture.

2. More Agents:

Agents will be deployed not just on the dev side but across the business, from customer service to reporting, operations, content, and almost every other aspect.

You need to inject the symbols and values into these agents. You should be tuning and training your customer-service agent to keep the same tone of voice as the brand. The content agent should generate content that keeps the brand values in place. The operational agent should control operations while keeping the process, tone, and methods aligned with the brand’s values and culture.

If you just roll out an agent without defining and fine-tuning this, then it’s going to kill your culture.

In order to make sure this is in place, you might have to write Evals, track them, and optimize them continuously.
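A minimal sketch of what such an eval could look like, with call_agent() as a placeholder for however you invoke your deployed agent and the brand rules kept deliberately simple:

```python
# Tiny eval harness sketch: check that an agent's replies stay on-brand.
BANNED_PHRASES = ["whatever", "not my problem"]              # illustrative brand rules
REQUIRED_TONE_MARKERS = ["happy to help", "thank you"]

def call_agent(prompt: str) -> str:
    # Placeholder: call your deployed customer-service agent here.
    return "Happy to help! Your order ships tomorrow, thank you for your patience."

def eval_tone(prompt: str) -> dict:
    reply = call_agent(prompt).lower()
    return {
        "prompt": prompt,
        "no_banned_phrases": not any(p in reply for p in BANNED_PHRASES),
        "on_brand_tone": any(m in reply for m in REQUIRED_TONE_MARKERS),
    }

print(eval_tone("My parcel is late, what are you going to do about it?"))
# Run a suite like this on every agent change and track the pass rate over time.
```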

3. Resources on Payroll vs. Resources on Demand

I believe that as agents continue to evolve and mature, companies will start relying on fewer full-time employees. Instead, they’ll increasingly onboard talent on a project or product basis, releasing them once the work is complete. Much of the work previously handled by these roles will soon be automated by intelligent agents. As a result, organizations will view this shift as an opportunity to significantly reduce operational costs.

From a cultural perspective, this shift poses a potential threat. As people join from different backgrounds, collaborate for short engagements, and then move on, maintaining a consistent culture becomes challenging. Therefore, clearly defining and reinforcing your company’s practices and rituals in every project will be crucial.

However, these practices shouldn’t feel rigid or imposed — they should naturally integrate into the way of working within each project. If you fail to define them, individuals from diverse environments will bring their own ways of working, and over time, your organization may lose its unique identity and culture.

4. Mini-Culture for every project

Since every project is going to onboard different people at different points in time, you need a mini-culture that fits the delivery. This should be a subset of your company culture that is consistent in values but flexible in execution.

For example, if your company defines a development process, it shouldn’t be overly rigid. The process should outline what needs to be done, not dictate how it must be done.

Think of it like:

Instead of saying, “You must eat three meals a day — breakfast, lunch, and dinner — and breakfast must be a three-course meal with specific dishes,” you simply set the expectation that “You must have three healthy meals a day.”

This allows individuals the freedom to choose what and how they eat, based on their context and availability, while still aligning with the core intent of staying healthy.

Now, bringing this back to your project, defining a process like this helps ensure that every team carries out a code review. But the format and who conducts it can change based on the systems, platforms, and technologies involved.

How can you preserve your culture?

All the best!

(Not joking — it’s not going to be easy, especially for those self-proclaimed “great culture” champions.) The real test begins now.

Your leaders and senior folks will have to step up and play a truly crucial role. Maybe take another look at that onion diagram and honestly reflect on what needs to change — the how and the why.

I believe this is something that we have to carefully consider, even when we are racing to keep up the pace. What do you think?

From Clicks to Context: Key Considerations for Embracing Conversational Commerce


Click to Context

After the last two posts, many have reached out to me, and we have had some good discussions. Thank you for all the feedback and sessions.

One question that kept coming back in all those meetings was: “How will this impact the current retail ecosystems?”

That question inspired me to write this piece. In this article, I won’t dive deep into each system and scenario, but rather provide insights and pointers on what actions to take and which areas are likely to experience change.


With OpenAI Apps, the potential scenario we are going to face in e-commerce in the near future is:

More and more companies will start using ChatGPT as another channel for selling their products, which means most retailers will be forced into that channel. So if you decide to go down that route, your current e-commerce ecosystem and architecture are going to see some impact or change.

As we transition from click commerce to context commerce, your content becomes the decisive factor — it will either make or break your success.
The conversational customer journey could be like this:

Conversational Product Discovery 

Customer opens ChatGPT and asks: “Hi <Brand>, find me a black running shoe, size 10, under $120.”

ChatGPT will show size 10 shoes that cost less than $120.

Customer: “Show me the blue one.” 

ChatGPT will look at your product feed and search data from RAG and see if you are selling blue shoes, and present that to the customer.

Customer: “Add this to cart, size 10”

ChatGPT will call the backend to create a cart and show that to the customer. 

Customer: “Checkout this”

ChatGPT calls the backend, which calculates taxes and shipping and returns a hosted payment session URL or a Stripe PaymentIntent (if card entry is required).

Customer: Enters the card details, and the purchase is completed.


In order to achieve the above customer journey, we need to do the following things:

Authentications & Cart Merges 

This will be the first touch point of change, and it’s not a biggie, but it’s a change that has to be thought through. 

Similar to how you authenticate users on the website and apps, you need to map ChatGPT sessions to site/app sessions. You will have to handle guest users, existing users, and existing users with an active cart, among other scenarios.
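A hedged sketch of that mapping, with made-up identifiers and an in-memory store standing in for your real session and cart services:

```python
# Sketch: map a ChatGPT session to a site session and merge an existing cart.
# All identifiers and the in-memory stores are illustrative placeholders.
site_sessions: dict[str, dict] = {}               # chatgpt_session_id -> site session
existing_carts = {"user_42": ["sku_black_tee"]}   # carts already on the site/app

def link_session(chatgpt_session_id: str, user_id: str | None) -> dict:
    session = site_sessions.setdefault(
        chatgpt_session_id, {"user_id": user_id, "cart": []}
    )
    if user_id and user_id in existing_carts:
        # Existing customer with an active cart: merge instead of overwriting.
        merged = set(session["cart"]) | set(existing_carts[user_id])
        session["cart"] = list(merged)
    return session

print(link_session("chat_abc123", "user_42"))   # a guest flow would pass user_id=None
```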

Commerce Orchestrator

It is worth thinking about creating an event-based commerce orchestrator with an MCP that dictates how your commerce flow should work. Some of the key responsibilities of this layer could be (a small sketch follows the list):

  1. Product feeds to LLMs and other systems
  2. Creating /Merging cart & Checkouts with different channels
  3. Payments
  4. Inventory & Price feeds
  5. Personalization
  6. Updating & retrieving from Knowledge Base 
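To make the orchestrator idea a bit more tangible, here is a minimal sketch of how a couple of those responsibilities could be exposed as MCP tools, using the MCP Python SDK’s FastMCP helper; the tool names, arguments, and data are all illustrative, not a reference implementation.

```python
# Sketch of a commerce orchestrator exposing a few MCP tools (illustrative names only).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("commerce-orchestrator")

@mcp.tool()
def get_product_feed(category: str) -> list[dict]:
    """Return the LLM-ready product feed for a category."""
    return [{"sku": "adizero-evo-black", "price": 119.0, "in_stock": True}]  # placeholder data

@mcp.tool()
def create_or_merge_cart(session_id: str, sku: str, qty: int = 1) -> dict:
    """Create a cart for this session, or merge the item into an existing one."""
    return {"session_id": session_id, "items": [{"sku": sku, "qty": qty}]}    # placeholder logic

if __name__ == "__main__":
    mcp.run()   # serve the tools so a ChatGPT app / agent can call them
```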

Product Feed

Most retailers should expect a change in the product feed, because:

The traditional product feed from ERPs or your existing PIMs will not work for LLMs. I am not talking about the format of these feeds; it’s about the extra information, the information that is traditionally not part of product feeds from these systems (for example, including inventory along with the product feed, or tax as a final value based on region).


So you could expect a change in the way you create this data and how you send this data to LLMs.

Knowledge Base 

Simply put, this is the way you expose your products, data, and services to RAG. This is a must, and most retailers don’t have it right now. I have touched upon this in the article SEO & AEO: Any Different?

This is a change not just in your tech ecosystem: almost every department in the business has to work together to figure out all the questions they receive over time, create a strategy, structure it, and publish it as content on the website.

This will call for a change in the way you create content in CMS; you will have to update the product content, and it is going to be a continuous process.

Reducing Hallucinations

New term for you? Don’t worry. It just means that RAG will retrieve the data and, based on that data, the LLM might hallucinate and give you a reply that is slightly off. For example:

While chatting, the customer might ask, “Is this an all-terrain shoe?” The LLM will reply: “Yes, it’s an all-terrain shoe.”

Customer: “Is it waterproof?”

LLM: “Yes, it is.”

The last answer is a hallucination. Based on your data, the LLM assumed that since it is an all-terrain shoe, it must be waterproof.

To stop these hallucinations, we have to write system prompts like:

“Do not invent product features or availability. If unsure, respond: ‘I can’t confirm that — check this product page’ and provide link/doc reference.”

You then back these guardrails up with Evals: test questions with expected answers that you run regularly to check that the model follows the instructions and doesn’t hallucinate.

Don’t worry, there are tools out there that you can plug in to do this quite easily. If you are going to use OpenAI’s agent tooling, you can feed your evals into it.
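For illustration, a tiny eval in that spirit, where answer_with_rag() is a placeholder for your real RAG call and the product facts are made up: if a feature is not in the retrieved data, the assistant should decline rather than guess.

```python
# Eval sketch: the assistant must not claim features that are absent from the retrieved facts.
PRODUCT_FACTS = {"adizero-evo-black": {"terrain": "all-terrain", "waterproof": None}}  # unknown

def answer_with_rag(sku: str, question: str) -> str:
    # Placeholder for the real RAG + LLM call governed by the system prompt above.
    return "I can't confirm that - please check the product page."

def eval_no_invented_features(sku: str) -> bool:
    reply = answer_with_rag(sku, "Is it waterproof?").lower()
    fact = PRODUCT_FACTS[sku]["waterproof"]
    if fact is None:                      # we genuinely don't know
        return "can't confirm" in reply or "cannot confirm" in reply
    return ("yes" in reply) == bool(fact)

print(eval_no_invented_features("adizero-evo-black"))   # expect True
```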

Payments

The new evolution of AI-enabled commerce is powered by the Agentic Commerce Protocol (ACP), a new, merchant-friendly open standard codeveloped by Stripe and OpenAI. 

Your payment platforms will also soon release this. It’s not rocket science for Service Integrators because most of the work will be done by your gateway. You just have to call it in the right way:

How it works:

After the customer chooses their preferred payment method, your payment gateway will issue a Shared Payment Token (SPT), a new payment primitive that lets applications like ChatGPT initiate a payment without exposing the buyer’s payment credentials. SPTs are scoped to a specific merchant and cart total. Once issued, ChatGPT passes the token to the merchant via API, and the merchant can then process the transaction through the gateway.
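Roughly, and with every endpoint and field name below being an illustrative placeholder rather than the actual Stripe or ACP API, the merchant-side step could look like this:

```python
# Illustrative-only sketch of handling a Shared Payment Token; names are placeholders,
# not the real Stripe / Agentic Commerce Protocol API.
import requests

def complete_checkout(spt: str, cart_id: str, amount_cents: int) -> dict:
    # 1. ChatGPT has already obtained the SPT from the payment gateway,
    #    scoped to this merchant and this cart total, and passed it to us via API.
    # 2. The merchant forwards the token to its own gateway to capture the payment.
    resp = requests.post(
        "https://gateway.example.com/v1/charges",   # placeholder gateway endpoint
        json={"shared_payment_token": spt, "cart_id": cart_id, "amount": amount_cents},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # e.g. an order confirmation payload
```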

Personalization

You can build this alongside your commerce orchestrator, or, if you already have a personalization engine, pass it information such as session history, browsing history, and purchase history to surface products.

Expose getRecommendations(session_id, product_id) as a tool for ChatGPT to call. Keep your customers’ privacy in mind and only share the IDs and small metadata.
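A small sketch of that tool, again using the MCP Python SDK’s FastMCP helper and a placeholder recommendation backend, returning only IDs and light metadata:

```python
# Sketch: expose getRecommendations as a tool, returning only IDs and light metadata.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("personalization")

@mcp.tool()
def getRecommendations(session_id: str, product_id: str) -> list[dict]:
    """Recommend related products for this session; no PII leaves the merchant."""
    # Placeholder: call your real personalization engine here.
    return [{"product_id": "adizero-evo-blue", "name": "Adizero Evo (blue)"}]

if __name__ == "__main__":
    mcp.run()
```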


The above is not a comprehensive list of impacted areas, but it covers most of the basic areas that will change. I tried to keep it at the basic impact level so that everyone can build on top of it.

The impact of change will be different for different retailers and depends entirely on your current architecture. Your imagination and budget also play a role; you could even think about adding an agentic layer to your architecture, and much more.

The great thing about the new agents being rolled out across all LLMs is that development will become much faster. You’ll be able to test creative ideas more easily. I believe that in this new world, imagination will face far fewer limitations due to technological constraints.

What do you think? If there’s a specific area you’d like to discuss, feel free to leave a comment or reach out — I’d love to continue the conversation.

OpenAI Apps SDK: The App store of ChatGPT?

Yesterday (6th October 2025), ChatGPT introduced apps you can chat with inside ChatGPT. This means you can launch an app inside ChatGPT. Yeah, you read that right, you can.

“Your customers can chat with your brand through ChatGPT!”

The most important questions are:

How will this work?

OpenAI’s Apps SDK is going to enable brands to create custom interactive apps inside ChatGPT, and these apps are going to look native within ChatGPT. These SDKs provide full control over the backend and frontend, allowing brands to customize their offerings and products directly within the chat interface of ChatGPT.

All users have to do is ask for the app name.

Imagine your company, XYZ, sells shoes online. Then:

Your customers can type this inside ChatGPT: “XYZ, find me size 10 black Adidas shoes.”

Boom! There you go.

Sounds exciting and amazing right?

What does this mean for you:

This means that you can deliver services & products directly to your customers who are in the discovery phase inside the ChatGPT. That is, instead of competing for customers’ attention, you can and have to compete to be genuinely helpful to your customers.

This also means that:

“In the near future, your websites are likely to evolve to become a repository of information for AI systems and search models rather than relying solely on direct user visits.”

Should you do it? If so, how to do it?

Currently, ChatGPT has 800 million active users, which means your brand can reach all of those users. Not just that, there is also the early-bird advantage.

Right now, this is available in preview from Oct 6, 2025, and they will be rolling out more details in the coming months. The brands that build apps during this time will have a significant advantage for sure.

What experiences can you create?

Using the Apps SDK, you can create interactive applications inside ChatGPT using your data, products, and services. You expose that data to your app through MCP servers.

Some quick wins for an e-commerce app using this SDK:

These are a few quick wins that came to mind while listening to this, and I am sure there will be more.

1. Product discovery:

You can make your entire catalogue available in ChatGPT. So the customers can type:

“XYZ, find me the new version of adidas black adizero evo shoes size 10.5”.

Customers don’t have to visit your website or app to browse and find the product; instead, they can do it directly from ChatGPT.
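As a sketch of how such product discovery could be wired up behind an app, here is an illustrative MCP tool over a toy in-memory catalogue; the tool name, filters, and data are assumptions, not the Apps SDK’s own API.

```python
# Sketch of an MCP tool a ChatGPT app could call for product discovery (illustrative only).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("xyz-shoes")

CATALOGUE = [
    {"sku": "adizero-evo-black-10.5", "color": "black", "size": "10.5", "price": 119.0},
    {"sku": "adizero-evo-blue-10.5", "color": "blue", "size": "10.5", "price": 119.0},
]

@mcp.tool()
def search_products(color: str | None = None, size: str | None = None,
                    max_price: float | None = None) -> list[dict]:
    """Find products matching the shopper's stated constraints."""
    results = CATALOGUE
    if color:
        results = [p for p in results if p["color"] == color.lower()]
    if size:
        results = [p for p in results if p["size"] == size]
    if max_price is not None:
        results = [p for p in results if p["price"] <= max_price]
    return results

if __name__ == "__main__":
    mcp.run()
```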

2. Transactions:

If they like the product, they can complete the purchase from ChatGPT itself using the new Agentic Commerce protocols, which offer instant checkout from inside ChatGPT.

3. Your Services

You can offer your services directly from the app. Imagine you provide sneaker customization and repair. Customers can then ask for help on how to fix something, or even book an appointment to customize their sneakers, through ChatGPT.

How to prepare the app and data

This is not like the traditional app, where the UX and UI were driving everything. But when you move to the world of OpenAI apps, you have to forget the traditional way of thinking and reimagine all your interactions from a conversational point of view.

1. Customer Journey:

The traditional customer journey is based on page-based thinking. These traditional journeys, which we are used to, won’t work anymore. Instead, look for the most common questions and the patterns in how they are asked. As I touched upon in one of my other articles (SEO & AEO: Any Different?), all of your customers’ questions become very, very important. This is the foundation of AI-native conversational apps.

2. Your Data:

We need clean data, and it has to be accessible dynamically through conversational interfaces. All your product data becomes more relevant now. If you have a CMS, enrich all your data with conversational interactions in mind, answering all the possible questions customers might ask. This is where I am forced to believe that eventually websites will become a repository of your data and services.

How will you be able to measure success in this world?

This is something we have to observe and learn in the coming days, but here is what I believe the key metrics might be:

LOI (Length of Interactions):

Similar to how you measured how long users spent on your site/app, you have to measure the length of conversations, satisfaction, whether users got what they were looking for, whether it resulted in a conversion, and so on.

Problem Resolution

Are you able to resolve the customer’s problem within the conversation? Based on this, optimize your data.

LTI (Life Time Interaction)

Life Time Interaction tracks how customers’ interactions evolve over a period of time. This will help you in gaining trust and eventual conversion.

My Take on this

I believe this is a platform shift similar to how the App Store changed the mobile app ecosystem. It requires ground-up rethinking of interactions, product data, service offerings, support data, customer data, and more.

It all comes down to how quickly you can adapt — the sooner, the better

What do you think?

SEO & AEO: Any Different?


SEO Vs AEO

Almost two years back, when I was working with one of my clients, he asked his SEO team: “How can we show up in ChatGPT?” The answers were all over the place, and most of the team was not quite sure how to approach it.

Fast forward two years, and we have started seeing changes in the way ChatGPT presents results, and it has become a new revenue-generation channel. It seems many were ignoring it or simply not aware of it.

Recently, I heard the same question in the meeting room and the responses weren’t much different. During that meeting, I encountered numerous obnoxious comments, such as…

  1. SEO is going to die.

  2. You can’t optimize for AEO.

  3. Someone even gave an obnoxiously big quote to do AEO.

  4. And many more.

It was all a mess. The big takeaway questions I have are: Are they different? Is SEO going to die? I will try to simplify things as much as possible, so let’s dive in!


Before we start, let’s first look at the basics and definitions.

AEO → Answer Engine Optimization (some people call this GEO; don’t worry, it’s the same thing, Generative Engine Optimization. I prefer and believe the first is a better choice of words, because generative engines can also generate images, videos, and so on, while our context here is text.)

SEO → Search Engine Optimization

I am not going to explain these here, because there is already a wealth of information available online; you can explore further by reading…

The first question is: Is it worth investing money and effort into doing AEO optimization?

The simple answer is yes! Because, as per the latest data, conversion from LLMs is 6X better than from Google. After the latest ChatGPT update, search results are showing up as tiles and clickable links, so conversion is going to go up even further.

So, how can you show up in LLMs?

How do you show up in LLM chats, like ChatGPT, Perplexity, Claude etc.

In order to understand this, we need to look at how this used to happen in the SEO world.

Simply put, in traditional SEO, we used to create landing pages for high-volume keywords. Over a period of time, you gain domain authority, your URLs gain value, and so on.

With AEO, this stays the same, but the Head and Tail are different.

For those who don’t understand head and tail in SEO:

Head or Head Terms: Head terms are keywords that are broad in nature and have a high volume of monthly searches. For example: shoes, running shoes, pet food, pet toys.

Tail or Long-Tail: Long-tail keywords are the more specific, and therefore less frequently searched-for, phrases related to a chosen topic and its head terms. For example: “wide toe box running shoes”, “best dog toys for angry dogs”.

What’s different about the Head in the AEO world?

If we simplify things, in the AEO world the head is whatever you do in SEO, plus getting as many mentions in citations as possible. If you get mentioned in citations, you will eventually start showing up in LLMs.

What’s different about the Tail in the AEO world?

The tail is larger in chat because of follow-up questions. As per the latest study, the average LLM tail query is 25 words, versus 6 in traditional SEO.

So basically, this means certain things are in our control and can be optimized.

Before looking into those, let’s try to understand how LLMs are finding information in a simple way.


Learning Models of LLM

At a high level, LLMs have two core learning modules: the Core Model and RAG (Retrieval-Augmented Generation).

Learning Models of LLM

Core Model:

This is trained by crawling millions and millions of web pages, much as we read books to gain knowledge about the world.

For example, if you type who invented the electric bulb, then it will automatically predict the next word “Thomas Alva Edison”.

RAG (Retrieval-Augmented Generation):

This is the equivalent of a search. This means that LLMs do the search and then summarize the search in simple English. 

So if we know how and where LLMs look, and what they value most, we can optimize and get our results in front of LLMs through the RAG module. Influencing or changing the core model is not easy: it would require a lot of effort and may not yield the desired results in the near future, or at all.
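In code terms, the RAG path is roughly retrieve-then-summarize. A minimal sketch, where retrieve() is a placeholder for whatever search is used and the OpenAI client is just one example of an LLM API:

```python
# Rough retrieve-then-summarize sketch of the RAG module.
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set; any LLM API would do

def retrieve(query: str) -> list[str]:
    # Placeholder for the search step: web search, vector DB, site index, etc.
    return ["Wide toe box running shoes reduce pressure on the forefoot ..."]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("wide toe box running shoes"))
```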


Onsite & Offsite Optimisation

So let’s look at what’s in your control and what you can do: There are things you can do onsite and offsite. We are not going to go into detail, as this is intended to give a high-level idea and a real starting point to understand the topic.


OnSite Optimization:

Anything that you can do on your site is called onsite, like site content, pages, indexes, etc. These are things that are 100% in your control. 

Remember the point AEO = SEO + Something Extra. 

What are these “somethings”? The strategy for finding some of them is:

  1. Find the questions people ask, then answer them as much as possible on your site

If you could create a page with all the possible questions users will ask, then you could win this. But the question is, how do you find these questions? (A small sketch of the keyword-to-question step follows the list of steps below.)

Step-1:

Take the search data and find the keywords from it. Use these keywords to create questions with ChatGPT.

Step-2:

Identify all the keywords currently being bid on in paid search. Use these keywords to create questions with ChatGPT.

Step-3:

Find the keywords your competitors are bidding on. Same as above: create questions for them with ChatGPT.

Step-4:

Gather all the questions being asked of your customer support teams, store teams, delivery teams, and so on.

Step-5:

Answer all of these and create landing pages for them. If you are an e-commerce company, you could even answer some of the product-specific questions on the product page itself.
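A small sketch of the keyword-to-question step used in Steps 1 to 3, using the OpenAI Python SDK as one example of how to generate candidate questions; the prompt and model choice are illustrative:

```python
# Sketch: turn harvested keywords (search, paid, competitor) into likely user questions.
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set

def questions_for(keyword: str, n: int = 5) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"List {n} questions a shopper might ask about '{keyword}', one per line.",
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [q.strip("-• ").strip() for q in lines if q.strip()]

for q in questions_for("wide toe box running shoes"):
    print(q)   # feed these into FAQ / landing-page content
```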

Key takeaway: The more questions you answer, the better

Off-Site Optimization:

There are many things that you could do, but we are trying to cover only the high-level and the basics. You need to strategize for each of these and execute and evaluate them continuously.

1. Who is showing up in Citations?

Citations are sites or content that talk about your product or company, so that you sound more authentic. Sounds like traditional SEO, right? But LLMs have a slightly different way of looking at this (maybe another topic for another day).

Search for your product or content in LLMs and check which citations are appearing; this tells you where you need to have a presence. These are some quick wins.

“The basic rule is that the more citations, the better.”

2. Some trusted places LLMs refer to and value for citations

These are some of the places LLMs value the most (this is as of now, and these examples are generalised across LLMs; this might, and will, change in the future):

a. Youtube

b. Vimeo

c. Reddit

d. Blogs

e. Credible sites

Of the above, YouTube and Vimeo videos are easy wins. Another quick but expensive strategy: if you are ready to spend money, you can get referenced by some of the prominent players in their content (citations). This will be expensive, but it is an easy, low-risk winning strategy.

Difference between Search and LLM

In search, a page targets thousands of keywords that are matched against the search term. An LLM, instead of keywords, looks at questions and follow-up questions, then builds context out of them before showing results. This means that if you provide answers to those questions, you have a chance of appearing in the results.


 Now that we understand the basics, let’s look at what happens when someone searches for “wide toe box running shoes” in an LLM.

First, it looks at its internal knowledge (core module) and sees if it has relevant information. If so, it responds almost instantly. If not, it will deploy the RAG to fetch the results before generating a response. 

This is where your onsite and offsite optimization will come into play, and “voila, you are there!”.


So now, coming to the most important question, which we asked at first: Is SEO going to die? Is SEO different from AEO?

Simple answer to both is: Not really!

We have been hearing the theory that “Google search is going to die” for a long time. We heard it with the launch of Facebook, Insta, TikTok, and so on, but the fact is, Google hasn’t died and is not going to die anytime soon. Instead, all of these have become new channels for businesses.

So LLMs are going to be another channel, probably the highest-converting channel. (This is already happening; you might not be able to see it in your analytics. Why you are not seeing it is a whole topic of its own, maybe for another day.)

Second part of the question: Is SEO different from AEO?

I would say there’s definitely a lot of overlap, and the basics of SEO remain the same. At the same time, AEO requires some additional efforts, and that will complement SEO.

I tried to keep the explanation as simple as possible to give you a basic understanding of how this works. We have barely scratched the surface, but I believe this gives a solid starting point to build your knowledge further. I would love to hear your thoughts and take on this in the comments…