Proving “AI for IT” is Safe, Suitable and Secure

[Cover image: AI for IT validation datasheet]

Are you an IT vendor with AI features in the works? Have you deployed AI features only to find that they raise more questions and concerns than they resolve?

GenAI is certainly the hot topic du jour, and in the IT Ops market it’s a competitive checkbox: if you haven’t rolled it out yet, it’s probably on your roadmap. As a long-time AI/ML aficionado1 and former enterprise IT product manager, I do have some cynicism about all the hype around GenAI. It is certainly game-changing, and it can augment, and even deskill, many tasks, especially those beyond the edge of the GPT user’s inherent skills and abilities. But incorporating GenAI can also add to user confusion, outright hallucinate or mislead users, expose corporate IP, increase solution fragility, and produce sub-optimal recommendations. Hostile users can even attempt to jailbreak or misuse vulnerable GenAI models for nefarious purposes.

The Bad, The Good and The Ugly

We also assess that GenAI is a bit over-hyped right now. We think GenAI will soon begin to drop into the infamous “trough of disillusionment,” especially for the many non-technical folks who believe GenAI is actually intelligent. In our research we’ve encountered more than a few otherwise skeptical people who really think today’s GenAI is literally “AI” (and that’s orders of magnitude more people than Eliza fooled decades ago!).

Cynicism aside, AI/ML solutions can elevate, accelerate, and help optimize almost any IT management or operations solution, especially if the functionality is expertly aligned with and applied to the IT use case. Problems can be identified faster (e.g., root cause), and solutions can be found, recommended, and even applied automatically at scale. (Of course, hackers are also leveraging GenAI, with deep fakes, social hacking, vulnerability exploitation, and more.)

We are finding in the field that the inclusion of AI features in working IT management and operations solutions can greatly confuse actual users, customers, and prospects. Everyone nods at the level-1 pitch that says “hey, there’s AI in the box,” but when it comes time to evaluate it and/or make an adoption decision, many IT shops don’t feel they have enough AI experience or depth.

Our AI for IT Validation Service

That’s where we can assist with our Small World Big Data AI for IT Validation. We’ve developed an in-depth, 60+ item inspection, conducted with the vendor, that covers every important aspect of real-world IT concerns regarding AI feature sets. We pair it with an equally deep analyst evaluation to summarize and score that vendor’s AI implementation.

When we conclude that the IT product or solution meets our standards for AI safety, security, and suitability (among many other factors), we’ll publish a concise review and opinion and award our official AI for IT Ops seal of approval. If the solution doesn’t yet measure up, we’ll deliver key findings to the vendor privately and provide an opportunity for re-evaluation after remediation.

If you’re interested, we are happy to provide further details about our evaluation process and the AI aspects it covers. While this service was developed mainly to help identify and differentiate good AI use cases amid the rash of GenAI “GPTs” being rapidly deployed across the industry, it can also be used to publicly validate the smart integration and application of any differentiating ML algorithm.

For details or any questions, ping us here, on LinkedIn, or use this handy form.

  1. Back at MIT in the ’80s, my first AI class was taught by Prof. Joseph Weizenbaum, who had written Eliza before I was born. Interestingly, Eliza is still available inside any Emacs editor. ↩︎