AI for Business and the Business of AI

Modern LLM-based AI has been the techno rage of the past 24-36 months (crypto, blockchain, deep learning, etc. now in the rearview mirror). OpenAI with ChatGPT, Google with Gemini, Anthropic with Claude, Perplexity with its Comet browser and a wide variety of open-source LLMs are pushing the boundaries of what an AI system can do. Consumers are adopting AI via the chat interfaces provided by the above-mentioned foundational model providers. They are using AI – text, video, audio – in a wide variety of “personal” workflows: in Google’s NotebookLM, Docs and Slides, in Microsoft’s products, and as part of everyday online search, where the AI mode provides a summary of top search results. However, adoption of LLM-based AI systems in enterprise settings is on a much more moderate growth curve. This essay explores the following questions – a) Is LLM-based AI ready for the rigors of everyday business use-cases? and b) How do advances in AI technology drive business adoption – is it customer pull or vendor push?

AI Readiness for Enterprises and Enterprise Readiness for AI

Basic LLM-based AI use-cases that can support a range of enterprise workflows are based on the following “elemental” tasks:
1. Text processing – summarization of documents, content extraction, unstructured to structured data extraction.
2. Document generation – Email drafting, analysis of documents, spreadsheet processing, generating presentations, preparing reports via Retrieval-Augmented Generation (RAG) and related technologies.
3. Dialog Management – Chat and audio conversation management, message summarization, text-to-speech and speech-to-text processing in a variety of languages.
4. Image/Video generation – Synthesizing video streams, image sequences, image extraction.
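As a flavor of the unstructured-to-structured extraction task above, here is a minimal sketch. The regex-based extractor is a stand-in for what would, in practice, be a prompt to a foundation model returning structured JSON; the field names and sample invoice are hypothetical.

```python
import json
import re

def extract_invoice_fields(text: str) -> dict:
    """Stand-in for an LLM extraction call: in practice the document and a
    target schema would go to a foundation model, which returns JSON."""
    patterns = {
        "invoice_number": r"Invoice\s*#?\s*([\w-]+)",
        "total": r"Total:?\s*\$?([\d,]+\.\d{2})",
        "due_date": r"Due:?\s*([\d/-]+)",
    }
    matches = {k: re.search(p, text) for k, p in patterns.items()}
    # Missing fields come back as None rather than raising.
    return {k: (m.group(1) if m else None) for k, m in matches.items()}

sample = "Invoice #INV-2041\nDue: 2025-03-01\nTotal: $1,250.00"
print(json.dumps(extract_invoice_fields(sample), indent=2))
```

The enterprise-grade version of this – handling layout variation, multiple languages and low-quality scans – is exactly where the 60-70% quality ceiling discussed below shows up.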

The quality of these “elemental” tasks varies across foundational model providers; depending on the specific task, average quality is roughly 60-70% on common measures, and the models improve with every iteration. The above elemental tasks focus on improving/aiding “perceptual processing”. Any kind of “in-depth” reasoning task (we do not know a priori how deep is deep in a given domain) required to address a business use-case needs “reasoning” support in an LLM. The notion of “reasoning” in the context of an LLM is quite different from the everyday, common-sense notion: “reasoning” in current LLM parlance means doing things in stages, with intermediate “pauses” between stages. This is a major research area and much remains to be done. Stable, repeatable, high-quality “reasoning” performance is still limited in foundational LLMs.
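This staged pattern can be sketched as a simple control loop, with a stub standing in for the model call. The function names and canned output here are hypothetical, not any particular vendor’s API:

```python
def call_model(prompt: str) -> str:
    """Stub standing in for a foundation-model API call."""
    return f"<model output for: {prompt[:40]}>"

def staged_reasoning(question: str) -> str:
    """'Reasoning' in current LLM parlance: work in stages, with a pause
    between stages where one stage's output feeds the next prompt."""
    plan = call_model(f"Break this into steps: {question}")   # stage 1: plan
    work = call_model(f"Work through these steps: {plan}")    # stage 2: execute
    return call_model(f"Given this work, answer: {work}")     # stage 3: synthesize

print(staged_reasoning("Why did Q3 margins drop?"))
```

Each “pause” is an opportunity to check, re-prompt or hand off to a tool – which is also where the instability shows up, since errors at one stage compound into the next.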

Enterprise AI adoption is also influenced by a variety of structural and operational constraints in a business (assuming financial resources are not limited) such as:
1) Existing IT applications and infra – including SaaS apps across functional areas and specific areas of expertise, such as apps for finance, marketing, field service, etc. These function in silos with minimal interactions – data, process or otherwise.
2) Existing enterprise IT architecture, including data and workflows – How is the data organized? How are the different business processes supported and unified? Digital-native companies are much better organized here, whereas older brick-and-mortar companies carry large legacy systems, data and processes in silos – like old ivy growing on historic buildings.
3) Ongoing data and process migrations across the board – Systems exist in silos and get upgraded in silos. Many pre-AI plans continue to be executed – how does one assimilate AI on the go, or redo them?
4) Finally, the expertise of in-house information technology teams that understand AI technology, its strengths and limits, and the future “needs and wants” of the business, and can envision a “transformation” plan, is a major factor.

Given the above-mentioned capabilities of AI systems and these organizational factors – how can AI be used for business: increasing the top line, or improving the bottom line by reducing costs? We discuss bottom-line opportunities first, then top-line opportunities. Bottom-line improvement use-cases include:
1. Re-imagining all data and document movement processes across the workforce. The basic “perceptual processing” LLM capabilities above suggest that enterprises should target deploying AI in all workflows dealing with near-standard documents of different types across functional areas – internal and external. Invoices, expense reports, HR documents and more are good targets. Both document analysis and document generation use-cases may be re-imagined, and existing systems providing both data and processes may be integrated into the re-imagined workflow. This is the biggest domain for consideration in a business. The latest AI agentic-orchestration platforms aim to address this need.
2. Re-doing all “touch-points” with customers and vendors – hiring enough customer-support staff to provide high-quality service may not be possible. Using audio agents, enabling intelligent FAQs and search interfaces for quality self-serve experiences, and handling all “status” queries – Where is my contract? What is the status of my payment? – can improve overall business-process performance and reduce auxiliary costs.
3. AI/LLM tools can power a wide variety of internal and external research activities and the development of internal knowledge-management systems. Capturing internal process information and external market and competition information, and combining them to power business strategy, is a major need. Building these tools to enable internal experts in every domain will reduce overall information-gathering, synthesis and analysis costs.

Top-line improvement use-cases primarily address the following question – where can I embed AI to make my service/product more useful for the customer? AI can be embedded in the product-development process or in the product/offering itself. Applications include –
1. Embedding AI/adaptivity into the product itself and the environments and contexts it operates in – both software and hardware. Examples include recommender systems and robotics use-cases; Google Workspace products are a good illustration, and many types of ambient-intelligence systems are being envisioned.
2. Software coding – Recent advances in AI-enabled coding suggest that software development can be seen as a growth-enablement tool rather than a cost sink (as it is often positioned now). One can perform product-development iterations much faster and explore a larger design space of product options. Much remains to be done here, as we are in the early days.
3. Media management – this includes all kinds of marketing and advertising efforts, such as content preparation, product marketing and more.

As “intelligence” gets embedded into the environment, products and services, one can imagine new products being developed. For example, embedding basic sensor intelligence into home systems such as A/Cs, lighting, security, cameras, etc. has led to the development of integrated home-monitoring systems. More intelligence either gets centralized or pushed to the edge, depending upon the use-case. Infrastructure such as networks and power management also gets better.

Overall, AI as it exists today is not “fully” ready for enterprise deployment. Readiness varies with use-case, business maturity and more. Much remains to be done, such as:
1. An experimental phase to figure out where AI can be applied, as discussed above, and where the potential benefits lie. What needs to be modified or newly invented to get AI deployed? Does AI impact your core competencies, or will you spend your resources improving support processes? What are the last-mile issues once you unpack the AI-in-the-box? How long will it take to reach the required quality and reliability in your systems?
2. Understanding organizational impact and the associated changes that may be required. Though deploying AI suggests reducing the workforce, we believe managing AI systems will require a higher-calibre workforce that can maintain them. How such a transition would happen is yet to be figured out.
3. Executing the AI transition at scale to reap the benefits. This has implications for understanding the core competency of the business versus changing auxiliary processes.

Over the past 24 months, many companies have executed pilot projects as they attempt to understand how to utilize AI – what is hype? What is reality? Many enterprise infra issues still need to be sorted out, such as:
1) Enterprise-scale requirements such as compliance, security and governance of AI systems. What are the policies? What happens when failure occurs in mission-critical contexts? How do I run proprietary LLMs handling proprietary data? What existing systems do I keep and what do I retire?
2) The utilization of “structured data” integrated with AI systems is still being figured out – RAG, knowledge graphs, tool calling, SLMs, etc. – many ways of doing the same thing. The use of ontologies is another big trend, a side effect of Palantir’s storytelling. Major investments of the past, such as databases and BPM systems, need to be reused properly. Ongoing data-science projects and big-data systems also need to be leveraged effectively.
3) The AI system build-out lifecycle is still being figured out – When do I fine-tune? How do I know I have converged to a viable solution? What are the guardrails? How long will it take to get to a solution? The last mile in many projects is a big issue. How do I handle all the variations required in my business? How do I do evals that I can rely on? What are the success criteria? Project management of AI system build and deployment is a complex task – it is product and service management combined. By the time I finish the project, the LLM has evolved; what do I do now? How frequently should I update my base LLMs? Do I build expertise in-house, hire consultants, or take a hybrid approach? The uncertainty inherent in the base technology, and the lack of “clear”/specific utility of these systems (once we discard the hype), is a major bottleneck. We hope things will improve on all these fronts to drive AI adoption.

4) One major issue is to envision the “human factors” – What “human” skills/expertise does one need as one imagines the business five years or a decade from now? What if the whole AI effort fails? What do I do in terms of risk management? (The situation is similar to offshoring all basic manufacturing from the US in the 90s.) How do I recover/redo? How much of AI is research, and how much is something stable I can build on?
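To make point 2 above concrete: retrieval-augmented generation, at its core, is “fetch the most relevant internal documents, then prompt with them.” A toy sketch follows, with word overlap standing in for embedding similarity and a hypothetical three-document corpus:

```python
from collections import Counter

# Hypothetical internal corpus; real systems index thousands of documents.
DOCS = {
    "policy": "Payment terms are net 30 days from invoice date.",
    "contract": "The contract renews annually unless cancelled in writing.",
    "faq": "Support hours are 9am to 5pm on weekdays.",
}

def score(query: str, doc: str) -> int:
    # Word-overlap count: a crude stand-in for embedding similarity.
    return sum((Counter(query.lower().split()) & Counter(doc.lower().split())).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank every document against the query and keep the top k.
    ranked = sorted(DOCS.values(), key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # The retrieved context is prepended to the question for the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what are the payment terms on this invoice"))
```

Knowledge graphs, ontologies and tool calling are alternative answers to the same question this sketch poses: how does proprietary structured data reach the model at the right moment?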
As enterprises muddle through answering the above questions, the AI industry chugs ahead, trying to be a one-stop shop for everything. Let us get a handle on some of the key trends, as billions of dollars pour in as investments to make AI a reality.
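Before turning to the industry side, the “evals that I can rely on” question from point 3 above can also be made concrete. A minimal harness scores a system against expected answers; the toy system and test cases here are hypothetical, and production evals would use rubrics or LLM-as-judge grading instead of simple string containment:

```python
def grade(output: str, expected: str) -> bool:
    # Containment check; production evals use rubrics or LLM-as-judge.
    return expected.lower() in output.lower()

def run_evals(system, cases) -> float:
    # Fraction of cases where the system's answer passes the grader.
    passed = sum(grade(system(q), want) for q, want in cases)
    return passed / len(cases)

# Hypothetical system under test: canned answers only.
def toy_system(question: str) -> str:
    return "Net 30 days." if "payment" in question else "I don't know."

CASES = [
    ("What are the payment terms?", "net 30"),
    ("When does the contract renew?", "annually"),
]
print(f"pass rate: {run_evals(toy_system, CASES):.0%}")
```

The hard part is not the harness but the cases: building a test set that actually covers the business’s variations, and deciding what pass rate is good enough to ship.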

The Business of AI

Current trends in the business of AI can be viewed along a few key topics –

Investments in AI

Investments in AI startups at different stages of growth, and in existing big-tech companies, are at an all-time high. AI infrastructure companies – chip makers, GPU makers, memory makers, the AI hyperscalers who provide the infrastructure – along with AI foundation-model makers, AI-enabled coding startups and, finally, AI “application” companies in different domains are all attracting large investments to power R&D and drive adoption. LLM-driven AI is also enabling robotic hardware companies to attract investment. All technology-related companies are making investments in internal AI projects or partnering with vendors who offer a variety of AI services. The implicit boundary between “horizontal AI stack” providers and vertical, domain-specific “AI” application developers has already blurred. The big AI foundational-model companies are investing in building vertical applications in specific functional areas and also creating consulting arms to engage with customers and deploy the AI stack. The bets are large, with hopes of outsized outcomes.

AI Technology Stack and AI Engineering

Foundational AI models are evolving rapidly, with new releases every quarter. Both open-source and closed-source LLMs are in an arms race as new LLM architectures are released. Training-data creation is a big business across the board. LLMs are being extended to build “world models” to make them more intelligent. A wide set of open-source tooling is available to build, deploy, debug and maintain LLM-based systems. AI engineering skills across the full AI lifecycle are in big demand.

AI Use-cases

AI is being deployed in nearly all sectors – education, health, wealth management and more. It is being embedded in the scenarios listed earlier in this essay, with varying levels of success. Multiple iterations are underway as the technology workforce adapts to this new stack. Business analysts and managers continue to evaluate the impact of the technology, with widely varying conclusions – suggesting the overall transformation is still a work in progress. It is early days.

AI ecosystem evolution

Doing something with AI – or at least basic AI literacy – has gone mainstream globally. The Turing and Nobel awards for AI pioneers have made being in AI an aspirational target. AI marketing budgets continue to rise to power the complete AI-adoption lifecycle. AI systems are slowly confronting real-world issues of AI bias, ethics, regulation and safety as the larger public engages with them. Much remains to be figured out here as things continue to evolve. The use of AI tools for “personal professional” growth is evolving as folks continue to experiment. The impact of AI on the labor workforce in various fields is yet to be understood. The role of law and regulation in AI adoption globally is still in its infancy.

Concluding Remarks

Given the state of the AI technology ecosystem, deploying AI in all its shapes and forms is going to be the next decade or more of technology evolution. Whether enterprises should get in now or later (once something reliable is in play) is an open question. Whether an enterprise should use AI adoption as a head-fake to drive other transformational decisions is another.

In conclusion, a few things regarding the future of Enterprise AI are quite clear –
1) AI is here to stay – it is “performant” enough to elicit genuine interest from a global audience across all strata of society, consumers and enterprises alike. Whether it will generate something of real long-term value as a stable building block for enterprises is yet to be seen. “Enterprise search” is a 40-year-old problem yet to be solved satisfactorily – just for the record.
2) Much remains to be done in each sector of industry – the process of exploration, exploitation and consolidation has just started, with AI as the “spearhead” (to use a quite contemporaneous term!). Being an AI luddite or an AI fanatic does not help – finding the middle path will require some serious engineering chops.


3) AI adoption will disintermediate extant ways of everyday life across sectors – especially those that rely on “past knowledge”, “knowledge curation”, “knowledge gatekeepers”, “knowledge disseminators” and more.
Hopefully, as I revisit this topic a year from now, we will have made serious headway on many of the above topics and trends.
