AI Threat: Please Stop the ASI and AGI Race.

“All that glitters is not gold.”

In a very short span of time, almost everyone has started using AI applications, knowingly or unknowingly. What began as a tech revolution is now impacting almost every job sector and almost every aspect of life.

Before I start, I would like to make my point clear: I am not against narrow AI, but I am against Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). The question is, why?

The builders of these systems are promising a utopia that is never going to exist. Before we build AGI/ASI, we need to ask ourselves whether we are mature enough to handle it.

I believe we are not! This technology is far more powerful than the atomic bomb we created.

Let’s have a look at the different aspects that made me believe that we are not ready for this:

Environmental Damage

I would like to quote Sam Altman, founder of OpenAI, who said in 2015:

“AI will probably most likely lead to the end of the world, but in the meantime, there will be great companies created with serious machine learning.”

In 2025, he said: “The world needs a lot more processing power … that’s like tiling datacenters on earth, which I think is what it looks like in the short term and long term.”

So he knew what would happen, but the need of the hour is more power. (More power to make that happen!? I am not sure.)

Where is this power coming from? We don’t have this much power right now. So this means we have to increase the power plants and solar plants in this world. Let’s look at some recent data so that you understand the scale of things.

Feb 2025: In one day, ChatGPT used enough power to charge 11,066 electric cars, or 8,000,000 phones from 0 to 100% (39.98 GWh). At this rate, in one year ChatGPT alone will use more power than 117 countries combined (14.592 TWh).

According to calculations and predictions from Dec 2024, AI power usage will grow 927% to 2,055% by 2028 (150 to 1,300 TWh).

Based on that data, it is estimated that by 2027 the annual water usage of AI will equal half of the UK’s annual water usage.

Now, you might think, how are these AI companies going to get this much power? The fact is, all of these AI companies are investing in power plants and power projects. 

For decades, hardly any new nuclear power plants were commissioned in the USA, but this year alone, agreements have been signed for five new nuclear power plants, all planned to be commissioned by 2030 and fully backed or owned by these tech giants. They are not stopping there: they are busy making the Earth look like a mirror by installing solar panels all around the world, and the so-called philanthropists are backing new power sources that could solve the power crisis.

This means we are either unaware of what is happening, or being kept in the dark by the authorities and the media, or we don’t care until it shows up on our doorstep.

Data Centers that are coming up like mushrooms after rain

Data centers used to be on the outskirts, always away from cities, but that is changing: they are now popping up everywhere, close to almost every neighborhood, because all these companies want is more computational power.

Data center map of USA

Have a look at https://www.datacentermap.com/usa/

If you zoom into the US, you can see them coming up close to water sources. They don’t mind being close to where people live, because all of these data centers need large amounts of water for cooling. So it doesn’t matter whether it’s near a township, in the middle of a city, or in a village: they need computational power, and water to cool it down.

Social Impact:

A less talked-about problem. 

According to the utopia painted by these tech giants, in the world of AGI & ASI people will be freed from economic necessity and can pursue creative, spiritual, and intellectual endeavors, while governments and corporations will have abundant resources to support the general public. Everyone will have government support, and most of us won’t have to go to work.

But

The most profound question is: if you don’t have anything to do, what will you do with the eight hours of your day?

“An idle mind is the devil’s workshop.” Psychologists and social scientists warn that prolonged idleness and disconnection could lead to a rise in antisocial and sociopathic behavior. Yet, it seems this possibility is either being overlooked or conveniently ignored by those shaping the narrative of the AI-driven future.

We have always had tools, and with the invention of each tool, tasks that once required ten workers could be done by four.

But this is the first time we are inventing an inventor. Something that can invent new things without our help.

Don’t get me wrong: I am not against AI. I am all in for narrow AI, but I am against general-purpose AI.

I believe we, and the world, are not ready for this. Until we know how to build these systems safely and keep them under democratic control, we should pause their development.

As Roman Yampolskiy says, “Now the race for ASI is like the race during the Second World War, when every country was after the atom bomb. They knew whoever created it was going to control the world.”

Nearly a century after the Second World War, we stand on the brink of the next monumental discovery, the one that will define who controls the future. But this is far greater than the atomic bomb. 

“Because an atomic bomb cannot create another bomb by itself, but this new creation can replicate, evolve, and expand beyond human comprehension, just as your cat or dog could never grasp the countless ways humans could harm them.”

We may never fully grasp what this technology is truly capable of. But we can decide what the future looks like by acting now.

What we need is a consortium, similar to how nuclear technology is controlled, to make sure this does not fall into the wrong hands. Many forums and associations are fighting for this.

If you connect with what I said above, you can sign this petition and show your support:

“If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further.” (Geoffrey Hinton, Nobel laureate and “Godfather of AI”)

Yet Another Reasoning Model: Kimi K2

A Chinese startup has launched yet another open-source AI model. Alibaba-backed Moonshot AI just dropped Kimi K2 Thinking, and, as some put it, “it beats GPT-5 and Claude Sonnet 4.5, period!”

This is arguably bigger than DeepSeek-R1, yet the media have largely stayed quiet about it.

As they claim:

  • Kimi-K2-Base: The foundation model, a strong start for researchers and builders who want full control for fine-tuning and custom solutions.
  • Kimi-K2-Instruct: The post-trained model is best for drop-in, general-purpose chat and agentic experiences. It is a reflex-grade model without long thinking.

And the results speak for themselves, backing up their claims:

Performance of Kimi K2 Vs Others

Even more impressive: “This Chinese startup pulled it off despite facing both regulatory restrictions and a lack of access to cutting-edge chipsets.”

I’m certain this is just the beginning, and there’s much more to come. Now the world is racing to crack Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). It mirrors the race for atomic energy in the 1940s: whoever succeeds first will hold immense power, capable of shaping (or potentially weaponizing) this technology against others.

It reinforces the idea that restrictions aren’t the path forward. Just as humanity once came together to control atomic energy, we now need to collaborate, share technology responsibly, and ensure it’s used wisely, preventing it from falling into the wrong hands.

Links:

Kimi K2: Open Agentic Intelligence, a Mixture-of-Experts model with 32 billion activated parameters and 1 trillion total parameters: moonshotai.github.io

Kimi AI (Kimi K2 Thinking, their open-source reasoning model): www.kimi.com

Github: https://github.com/MoonshotAI/Kimi-K2

API : https://platform.moonshot.ai/docs/overview

Vector RAG & Graph RAG: A Quick Read on Where to Use What.

When we try to replicate real-world thinking or mimic the human thought process in any form, it’s important to recognize that the world itself is inherently relational. Humans understand, react, and make decisions by connecting people, emotions, experiences, and contexts. So why are we forcing all this richness into vectors, effectively stripping away the relational semantics that give problems their real meaning?

When we use a vector DB for storage, we lose the relationships. In a vector DB, each piece of data is converted into a vector embedding, a long list of numbers, and those numbers are saved in the database. When you search for something, it uses a similarity metric such as cosine similarity or Euclidean distance to find the most similar vectors. This is ideal for simple question-answering, recommendation systems, and the like, where we run single-hop queries, similarity searches, or stat retrievals.
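To make the similarity-search idea concrete, here is a minimal sketch in plain Python. The hand-made three-number “embeddings” and document names are toy assumptions standing in for a real embedding model and vector DB; only the cosine ranking logic is the point.

```python
import math

# Toy "vector DB": each document is stored as an embedding.
# (Hand-made 3-dim vectors; a real system would produce these
# with an embedding model.)
DOCS = {
    "return policy":   [0.9, 0.1, 0.0],
    "shipping times":  [0.7, 0.3, 0.1],
    "cooking recipes": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

print(search([0.8, 0.2, 0.0]))  # a query vector "near" the policy docs
```

Note that nothing here knows *why* documents relate to each other; it only knows which vectors point in similar directions.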

But what do you do when you can’t compromise on accuracy, and it is non-negotiable?

This is where a Knowledge Graph, or Graph RAG, comes in handy. It can do multi-hop traversals, and each hop can carry a weight. In a knowledge graph, data is stored as facts, entities, and relationships, where each entry represents explicit knowledge and the relationships between entities are explicitly defined.

This is why this is useful in tasks where reasoning & inference, precision & accuracy, ontology & the relationships matter the most.
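The multi-hop idea above can be sketched in a few lines. The entities and relations below are made-up examples: each fact is an explicit (subject, relation) → object edge, so a multi-hop question becomes a chain of exact lookups rather than a similarity guess.

```python
# Minimal knowledge graph: (subject, relation) -> object.
GRAPH = {
    ("alice", "manager"):   "bob",
    ("bob", "works_for"):   "acme",
    ("acme", "based_in"):   "berlin",
}

def hop(entity, relation):
    """One explicit hop along a named relationship."""
    return GRAPH.get((entity, relation))

def multi_hop(start, relations):
    """Follow a chain of relations from a starting entity."""
    node = start
    for rel in relations:
        node = hop(node, rel)
        if node is None:  # chain broken: no such fact
            return None
    return node

# "Where is the company of Alice's manager based?"
print(multi_hop("alice", ["manager", "works_for", "based_in"]))  # berlin
```

Each hop is a precise fact, which is why this style suits reasoning- and precision-heavy tasks.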

The simple difference between Vector RAG and Graph RAG is as below:

  • Vector RAG: stores embeddings; retrieves by similarity search (cosine, Euclidean); best for single-hop Q&A, recommendations, and semantic lookups.
  • Graph RAG: stores entities and explicit relationships; retrieves by multi-hop traversal with weighted hops; best for reasoning, precision, and ontology-heavy tasks.

This doesn’t mean you can’t use them interchangeably. But it’s better to match your data structure to your reasoning requirements, not your technology preferences.

In practice, it’s better to use a hybrid architecture combining vector, graph, and relational databases to leverage the strengths of each. This approach lets you retrieve both meaningful (semantic) and precise (factual) information, especially when integrated with an LLM.
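One way to sketch such a hybrid is a small router that sends relational-sounding questions to the graph and everything else to vector search. The keyword heuristic and the stand-in retrievers below are illustrative assumptions, not a production design:

```python
# Stand-ins for real retrievers in a hybrid architecture.
def vector_search(query):
    return [f"semantic match for '{query}'"]

def graph_lookup(query):
    return [f"exact facts for '{query}'"]

# Naive heuristic: relational phrasing hints at a graph query.
RELATIONAL_HINTS = ("who", "which", "how many", "related to")

def retrieve(query):
    """Route a query to the graph or the vector store."""
    if any(hint in query.lower() for hint in RELATIONAL_HINTS):
        return graph_lookup(query)
    return vector_search(query)

print(retrieve("Which supplier delivers part X?"))
print(retrieve("Summarize our return policy"))
```

A real system would likely query both stores and let the LLM fuse the results, but the routing idea is the same.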

From Clicks to Context: Key Considerations for Embracing Conversational Commerce


Click to Context

After the last two posts, many have reached out to me, and we have had some good discussions. Thank you for all the feedback and sessions.

One question that kept coming up in all those meetings was: “How will this impact the current retail ecosystems?”

That question inspired me to write this piece. In this article, I won’t dive deep into each system and scenario, but rather provide insights and pointers on what actions to take and which areas are likely to change.


With OpenAI Apps, the potential scenario we are going to face in e-commerce in the near future is this:

More and more companies will start using ChatGPT as another sales channel, which means most retailers will be forced into that channel. So if you decide to go down that route, your current e-commerce ecosystem and architecture are going to see some changes.

As we transition from click commerce to context commerce, your content becomes the decisive factor — it will either make or break your success.
The conversational customer journey could be like this:

Conversational Product Discovery 

Customer opens ChatGPT and asks: “Hi <Brand>, find me a black running shoe, size 10, under $120.”

ChatGPT shows the size-10 shoes under $120.

Customer: “Show me the blue one.” 

ChatGPT looks at your product feed, searches data from RAG to check whether you sell blue shoes, and presents the results to the customer.

Customer: “Add this to cart, size 10”

ChatGPT calls the backend to create a cart and shows it to the customer.

Customer: “Check out this cart”

ChatGPT calls the backend, which calculates taxes and shipping and returns a hosted payment session URL, or a Stripe PaymentIntent if card entry is required.

Customer: Enters the card details, and the purchase is completed.
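The journey above can be sketched as a set of backend tool handlers. Everything here (function names, the in-memory catalog, the placeholder payment URL) is an illustrative assumption, not a real API:

```python
# Toy catalog and cart store standing in for real commerce services.
CATALOG = [
    {"id": "sku-1", "name": "Runner X", "color": "black", "size": 10, "price": 110},
    {"id": "sku-2", "name": "Runner X", "color": "blue",  "size": 10, "price": 115},
]
CARTS = {}

def search_products(color=None, size=None, max_price=None):
    """Tool backing 'find me a black running shoe size 10, under $120'."""
    return [p for p in CATALOG
            if (color is None or p["color"] == color)
            and (size is None or p["size"] == size)
            and (max_price is None or p["price"] <= max_price)]

def add_to_cart(session_id, product_id):
    """Tool backing 'Add this to cart'."""
    CARTS.setdefault(session_id, []).append(product_id)
    return CARTS[session_id]

def checkout(session_id, tax_rate=0.08):
    """Tool backing 'Checkout': totals + a hosted payment session."""
    items = [p for p in CATALOG if p["id"] in CARTS.get(session_id, [])]
    total = round(sum(p["price"] for p in items) * (1 + tax_rate), 2)
    # A real flow would return a gateway-hosted payment session here.
    return {"total": total, "payment_url": "https://pay.example/session/123"}

print(search_products(color="black", size=10, max_price=120))
add_to_cart("chat-1", "sku-1")
print(checkout("chat-1"))  # 110 * 1.08 = 118.8
```

ChatGPT would invoke these as tools in sequence; the contract between each tool and your backend is what you actually have to design.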


In order to achieve the above customer journey, we need to do the following things:

Authentications & Cart Merges 

This will be the first touchpoint of change. It’s not a biggie, but it’s a change that has to be thought through.

Similar to how you authenticate users on the website and apps, you need to map ChatGPT sessions to your site/app sessions. You will have to handle guest users, existing users, and existing users with an active cart, among other scenarios.
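As a sketch of that cart-merge logic, the snippet below links a ChatGPT session to a site user and merges a chat cart into the user’s existing site cart; all structures and names are assumptions for illustration:

```python
# Existing site state (assumed shapes, for illustration only).
SITE_CARTS = {"user-42": ["sku-9"]}   # user's active cart on your site
CHAT_SESSIONS = {}                    # chatgpt session id -> site user id

def link_session(chat_session_id, site_user_id):
    """Map an authenticated ChatGPT session to a site/app session."""
    CHAT_SESSIONS[chat_session_id] = site_user_id

def merge_cart(chat_session_id, chat_cart):
    """Merge items added in chat into the user's existing site cart."""
    user = CHAT_SESSIONS.get(chat_session_id)
    if user is None:          # guest user: keep the chat cart as-is
        return chat_cart
    merged = SITE_CARTS.setdefault(user, [])
    for item in chat_cart:
        if item not in merged:  # de-duplicate on merge
            merged.append(item)
    return merged

link_session("chat-abc", "user-42")
print(merge_cart("chat-abc", ["sku-1", "sku-9"]))  # ['sku-9', 'sku-1']
```

The guest-user, existing-user, and active-cart scenarios from the text map directly onto the branches above.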

Commerce Orchestrator

It’s worth creating an event-based commerce orchestrator, with an MCP server, that dictates how your commerce flow should behave. Some key responsibilities of this layer could be:

  1. Product feeds to LLMs and other systems
  2. Creating/merging carts and checkouts across channels
  3. Payments
  4. Inventory & price feeds
  5. Personalization
  6. Updating & retrieving from the knowledge base
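A minimal sketch of such an event-based orchestrator, assuming made-up event names and handlers: channels (web, app, ChatGPT) publish events, and registered handlers such as inventory and personalization react to them.

```python
# Registry of event handlers: event type -> list of callbacks.
HANDLERS = {}

def on(event_type):
    """Decorator: subscribe a handler to an event type."""
    def register(fn):
        HANDLERS.setdefault(event_type, []).append(fn)
        return fn
    return register

def publish(event_type, payload):
    """Dispatch an event to every subscribed handler, in order."""
    return [fn(payload) for fn in HANDLERS.get(event_type, [])]

@on("cart.created")
def sync_inventory(payload):
    return f"reserve stock for {payload['items']}"

@on("cart.created")
def personalize(payload):
    return f"update profile {payload['user']}"

# Any channel (web, app, ChatGPT) publishes the same event.
print(publish("cart.created", {"user": "u1", "items": ["sku-1"]}))
```

The point of the pattern is that each channel only emits events; the orchestrator decides which downstream systems react.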

Product Feed

Most retailers should expect a change in the product feed, because the traditional product feed from ERPs or your existing PIM will not work for LLMs. I am not talking about the format of these feeds, but about the extra information that is traditionally not part of product feeds from these systems (e.g., including inventory along with the product feed, or tax as a final value based on region).


So you could expect a change in the way you create this data and how you send this data to LLMs.
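To illustrate the kind of extra information an LLM-ready feed entry might carry, here is a hypothetical example with inventory and region-final tax inlined; every field name and value is an assumption for illustration, not a standard.

```python
import json

# Hypothetical LLM-ready feed entry: the usual PIM fields plus
# inlined inventory, region-final tax, and conversational Q&A copy.
feed_item = {
    "id": "sku-1",
    "name": "Runner X",
    "attributes": {"color": "black", "size": 10, "waterproof": False},
    "price": {"amount": 110.00, "currency": "USD"},
    "tax_final": {"US-CA": 119.90, "US-TX": 117.15},  # final value per region
    "inventory": {"in_stock": True, "quantity": 23},   # inlined, not a separate feed
    "qa": [
        {"q": "Is it waterproof?", "a": "No, it is water-resistant only."},
    ],
}

print(json.dumps(feed_item, indent=2))
```

The difference from a classic ERP/PIM export is that everything the model needs to answer a customer is in one place, pre-resolved.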

Knowledge Base 

Simply put, this is how you expose your products, data, and services to RAG. It is a must, and hardly any retailers have it right now. I have touched upon this in the article “SEO & AEO: Any Different?”

This is a change not just in your tech ecosystem: almost every department in the business has to work together to gather all the questions customers ask over time, create a strategy, structure it, and publish it as content on the website.

This will call for a change in how you create content in your CMS; you will have to keep product content updated, and it is going to be a continuous process.

Reducing Hallucinations

New term for you? Don’t worry: it just means that when the LLM answers from the data RAG retrieves, it can give you a reply that is slightly off. For example:

While chatting, the customer might ask, “Is this an all-terrain shoe?” The LLM replies: “Yes, it’s an all-terrain shoe.”

Customer: “Is it waterproof?”

LLM: “Yes, it is.”

The last answer is a hallucination: based on your data, the LLM inferred that since it is an all-terrain shoe, it should be waterproof.

To reduce these hallucinations, we write system prompts like:

“Do not invent product features or availability. If unsure, respond: ‘I can’t confirm that — check this product page’ and provide link/doc reference.”

Guardrail prompts like this, combined with evals that regularly test the model’s answers against your product data, help keep hallucinations in check.

Don’t worry, there are tools out there that we can plug in to do this quite easily. If you are going to use OpenAI’s agent tooling, you can feed your evals into it.
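One simple eval along these lines can be sketched as a grounding check: compare each attribute the model claimed against the catalog data and flag anything the data does not support. The catalog shape and claim format below are assumptions for illustration.

```python
# Assumed catalog of ground-truth product attributes.
CATALOG = {
    "sku-1": {"all-terrain": True, "waterproof": False},
}

def ungrounded_claims(product_id, claimed):
    """Return claims the model made that the data does not support."""
    facts = CATALOG[product_id]
    return [attr for attr, value in claimed.items()
            if facts.get(attr) != value]

# The model answered "yes" to both questions; only one is in the data.
answer_claims = {"all-terrain": True, "waterproof": True}
print(ungrounded_claims("sku-1", answer_claims))  # ['waterproof']
```

Running checks like this over a set of test conversations is what turns a guardrail prompt into something you can actually measure.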

Payments

The new evolution of AI-enabled commerce is powered by the Agentic Commerce Protocol (ACP), a new, merchant-friendly open standard co-developed by Stripe and OpenAI.

Your payment platform will also release support for this soon. It’s not rocket science for system integrators, because most of the work is done by your gateway; you just have to call it the right way.

Here is how it works: after the customer chooses their preferred payment method, your payment gateway issues a Shared Payment Token (SPT), a new payment primitive that lets applications like ChatGPT initiate a payment without exposing the buyer’s payment credentials. SPTs are scoped to a specific merchant and cart total. Once issued, ChatGPT passes the token to the merchant via API, and the merchant can then process the transaction through the gateway.
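The merchant side of that flow can be sketched as below. The token fields and validation logic are illustrative assumptions; the real shapes come from your gateway’s ACP implementation.

```python
def validate_spt(spt, merchant_id, cart_total):
    """An SPT is scoped to one merchant and one cart total."""
    return spt["merchant"] == merchant_id and spt["amount"] == cart_total

def process_payment(spt, merchant_id, cart_total):
    """Merchant-side handler for a token passed in by ChatGPT."""
    if not validate_spt(spt, merchant_id, cart_total):
        return {"status": "rejected", "reason": "token out of scope"}
    # Here you would forward the token to your gateway's charge API;
    # the buyer's card details never reach the merchant or ChatGPT.
    return {"status": "captured", "amount": cart_total}

# Token as issued by the gateway (illustrative fields).
token = {"merchant": "xyz-shoes", "amount": 118.80}
print(process_payment(token, "xyz-shoes", 118.80))
print(process_payment(token, "xyz-shoes", 999.00))  # scope mismatch
```

The scoping check is the important part: a leaked token is useless outside the exact merchant and cart total it was minted for.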

Personalization

You can build this into your commerce orchestrator, or, if you already have a personalization engine, pass it information such as session history, browsing, and purchase history to surface products.

Expose getRecommendations(session_id, product_id) as a tool for ChatGPT to call. Keep your customers’ privacy in mind and share only IDs and minimal metadata.
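A sketch of what exposing that tool could look like: a JSON-schema style declaration plus a handler that strips the response down to IDs and minimal metadata. The spec shape and recommendation logic are assumptions for illustration.

```python
# Illustrative tool declaration for the assistant to call.
TOOL_SPEC = {
    "name": "getRecommendations",
    "parameters": {
        "type": "object",
        "properties": {
            "session_id": {"type": "string"},
            "product_id": {"type": "string"},
        },
        "required": ["session_id", "product_id"],
    },
}

def get_recommendations(session_id, product_id):
    """Handler behind the tool; logic here is a stand-in."""
    # A real engine would use session + browsing + purchase history.
    recs = [{"id": "sku-2", "name": "Runner X (blue)", "price": 115,
             "internal_score": 0.93}]
    # Privacy: strip everything except IDs and small metadata.
    return [{"id": r["id"], "name": r["name"]} for r in recs]

print(get_recommendations("chat-1", "sku-1"))
```

Filtering the payload at the tool boundary, rather than trusting the caller, is what keeps internal scores and personal data out of the conversation.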


The above is not a comprehensive list of impacted areas, but it covers almost all the basic areas that will change. I tried to keep it at the basic impact level so that everyone can build on top of this.

The impact will differ between retailers and depends on your current architecture. Your imagination and budget also play a role; we could even think about adding an agentic layer to your architecture, and much more.

The great thing about the new agents being rolled out across all LLMs is that development will become much faster. You’ll be able to test creative ideas more easily. I believe that in this new world, imagination will face far fewer limitations due to technological constraints.

What do you think? If there’s a specific area you’d like to discuss, feel free to leave a comment or reach out — I’d love to continue the conversation.

OpenAI Apps SDK: The App store of ChatGPT?

Yesterday (6 October 2025), OpenAI introduced apps you can chat with inside ChatGPT. This means you can launch an app inside ChatGPT. Yes, you read that right.

“Your customers can chat with your brand through ChatGPT!”

The most important questions are:

How will this work?

OpenAI’s Apps SDK enables brands to create custom interactive apps inside ChatGPT that look native within it. The SDK gives control over both backend and frontend, allowing brands to present their offerings and products directly within the chat interface.

All users have to do is ask for the app name.

Imagine your company, XYZ, sells shoes online. Then:

Your customers can type this inside ChatGPT: “XYZ, find me size 10 black Adidas shoes.”

Boom! There you go.

Sounds exciting and amazing right?

What does this mean for you:

This means you can deliver services and products directly to customers who are in the discovery phase inside ChatGPT. In other words, instead of competing for customers’ attention, you can, and have to, compete to be genuinely helpful to them.

This also means that:

In the near future, your website is likely to evolve into a repository of information for AI systems and search models, rather than relying solely on direct user visits.

Should you do it? If so, how to do it?

Currently, ChatGPT has 800 million active users, which means your brand can reach all of them. Not only that, there is also an early-bird advantage.

Right now, this is available in preview (from Oct 6, 2025), and more details will be rolled out in the coming months. The brands that build apps during this period will have a significant advantage.

What experiences can you create?

Using the Apps SDK, you can create interactive applications inside ChatGPT with your data, products, and services, exposing them to the apps through MCP servers.
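As a sketch, an e-commerce app’s MCP server might expose a handful of named tools over your own backend. The manifest below is purely illustrative (server and tool names are assumptions, not the actual SDK wire format):

```python
import json

# Illustrative description of what an XYZ-shoes app could expose
# to ChatGPT through an MCP server: named tools over your backend.
manifest = {
    "server": "xyz-shoes",
    "tools": [
        {"name": "search_products", "description": "Search the XYZ catalogue"},
        {"name": "add_to_cart",     "description": "Add a product to the cart"},
        {"name": "book_service",    "description": "Book a customization appointment"},
    ],
}

def tool_names(m):
    """List the tool names an assistant could discover and call."""
    return [t["name"] for t in m["tools"]]

print(json.dumps(manifest, indent=2))
print(tool_names(manifest))
```

Each tool maps to one capability discussed below: product discovery, transactions, and services.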

Some quick wins for an e-commerce app using this SDK

These are a few quick wins that came to mind while listening to the announcement, and I am sure there will be more.

1. Product discovery:

You can make your entire catalogue available in ChatGPT. So the customers can type:

“XYZ, find me the new version of adidas black adizero evo shoes size 10.5”.

Customers don’t have to visit your website or app to browse and find the product; instead, they can do it directly from ChatGPT.

2. Transactions:

If they like the product, they can complete the purchase inside ChatGPT itself using the new Agentic Commerce Protocol, which offers instant checkout from within ChatGPT.

3. Your Services

You can offer your services directly from the app. Imagine you provide sneaker customization and repair: customers can ask for help fixing something, or even book a customization appointment through ChatGPT.

How to prepare the app and data

This is not like a traditional app, where UX and UI drive everything. In the world of OpenAI apps, you have to forget the traditional way of thinking and reimagine all your interactions from a conversational point of view.

1. Customer Journey:

The traditional customer journey is based on page-based thinking, and those journeys we are used to won’t work anymore. Instead, look for the most common questions and the patterns in how they are asked. As I touched upon in another article (“SEO & AEO: Any Different?”), all of your customers’ questions become very, very important. This is the foundation of AI-native conversational apps.

2. Your Data:

We need clean data that is accessible dynamically through conversational interfaces. All your product data becomes more relevant now. If you have a CMS, enrich all your data with conversational interactions in mind, answering every question customers might ask. This is where I am forced to believe that websites will eventually become repositories of your data and services.

How will you be able to measure success in this world?

This is something we will have to observe and learn in the coming days, but here is what I believe the key metrics might be:

LOI (Length of Interactions):

Similar to how you measure the time users spend on your site/app, you will have to measure the length of conversations: satisfaction, whether users got what they were looking for, whether the conversation resulted in a conversion, and so on.

Problem Resolution

Are you able to resolve the customer’s problem within the conversation? Optimize your data based on this.

LTI (Life Time Interaction)

Life Time Interaction tracks how a customer’s interactions evolve over time. This will help you build trust and, eventually, conversions.
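These metrics can be computed from a simple interaction log. The log format below is an assumption for illustration; the point is that LOI, resolution rate, and LTI all fall out of the same per-conversation records.

```python
# Assumed per-conversation log entries for one customer.
LOG = [
    {"turns": 6, "resolved": True,  "converted": True},
    {"turns": 3, "resolved": True,  "converted": False},
    {"turns": 9, "resolved": False, "converted": False},
]

def loi(log):
    """Length of Interaction: average turns per conversation."""
    return sum(e["turns"] for e in log) / len(log)

def resolution_rate(log):
    """Share of conversations where the problem was resolved."""
    return sum(e["resolved"] for e in log) / len(log)

def lti(log):
    """Life Time Interaction: conversations with this customer so far."""
    return len(log)

print(loi(LOG), resolution_rate(LOG), lti(LOG))
```

In practice you would segment these by customer and over time, but even these three numbers let you start optimizing your data.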

My Take on this

I believe this is a platform shift, similar to how the App Store changed the mobile app ecosystem. It requires ground-up rethinking of interactions, product data, service offerings, support data, customer data, and more.

It all comes down to how quickly you can adapt: the sooner, the better.

What do you think?