Iris.ai Unveils Advanced AI and LLM Accuracy Infrastructure Offering
VANCOUVER, British Columbia, Dec. 12, 2024 -- Iris.ai, a leading provider of AI engines for deep knowledge and textual understanding, has announced a suite of services designed to enhance AI accuracy for companies building LLM solutions for internal use and external commercial use alike. The RSpace Infrastructure solutions include Retrieval-Augmented Generation (RAG)-as-a-Service, advanced LLM evaluation frameworks, prompt optimization of user queries, and high-precision data extraction capabilities, meeting the growing demand for more accurate LLM usage.
Factuality and accuracy are increasingly critical concerns for those seeking value from current AI systems. Recent research from IBM highlights that nearly 50% of CEOs are worried about the accuracy of AI systems.
To address this challenge, Iris.ai has opened up its infrastructure to provide tools to secure and evaluate accuracy across varied data sources and domain-specific areas, all within a secure environment. Initially developed for the company's scientific literature tool suite RSpace, the Infrastructure solution applies the same scientific principles of deep domain-specific knowledge, accuracy, flexibility, and scalability to any organization looking to build LLM solutions.
The offering includes sophisticated tools for Retrieval-Augmented Generation, surpassing traditional vector-based methods with hybrid retrieval approaches, domain-specific embeddings, and intelligent query optimization. These features ensure nuanced, contextually relevant results tailored to the unique requirements of each query. Iris.ai's Multi-RAG-as-a-Service enables agent-based selection of retrieval methods based on the user query, in addition to retrieval relevance scoring.
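The hybrid retrieval idea described above can be illustrated with a minimal sketch: blend a dense (vector-similarity) score with a sparse (keyword-overlap) score before ranking. All function names, weights, and the toy embeddings below are hypothetical and not Iris.ai's actual implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    """Fraction of query terms that appear in the document (sparse signal)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.6):
    """Rank (text, vector) pairs by a weighted blend of dense and sparse scores."""
    scored = []
    for text, vec in docs:
        s = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
        scored.append((s, text))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [text for _, text in scored]

docs = [
    ("cathode materials for lithium batteries", [0.9, 0.1]),
    ("weather forecast for tomorrow", [0.0, 1.0]),
]
ranked = hybrid_rank("cathode materials", [1.0, 0.0], docs)
```

A production system would add a relevance-scoring pass and an agent that picks which retrieval method to run, but the weighted blend above is the core of most hybrid approaches.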
To help organizations choose the most suitable large language models (LLMs) for each of their specific use cases, Iris.ai has created a comprehensive evaluation framework. The framework employs a broad set of metrics, including a proprietary context-sensitivity metric, and is auto-configured from a small sample set to evaluate models at the LLM, RAG, and prompt levels. Businesses can confidently deploy models optimized for their specific operational use cases, and replace them when measurably better ones come along.
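The skeleton of such an evaluation framework can be sketched in a few lines: score each candidate model on a small labelled sample set with one or more metrics, then select the best performer. The metric and sample set here are stand-ins, not Iris.ai's proprietary context-sensitivity metric.

```python
def exact_match(output, expected):
    """Simplest possible metric: 1.0 on an exact (case-insensitive) match."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def evaluate(model, samples, metric=exact_match):
    """Average metric score of a model (a callable) over (prompt, expected) pairs."""
    return sum(metric(model(p), e) for p, e in samples) / len(samples)

def select_best(models, samples, metric=exact_match):
    """Score every named model and return the top performer plus all scores."""
    scores = {name: evaluate(fn, samples, metric) for name, fn in models.items()}
    return max(scores, key=scores.get), scores

samples = [("2+2", "4"), ("capital of France", "Paris")]
model_a = lambda p: {"2+2": "4", "capital of France": "Paris"}.get(p, "")
model_b = lambda p: "4"  # always answers "4"
best, scores = select_best({"a": model_a, "b": model_b}, samples)
```

The same harness generalizes to RAG-level and prompt-level evaluation by swapping what the callable wraps: the bare LLM, the full retrieval pipeline, or a prompt template.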
The Prompt Optimization feature of RSpace Infrastructure makes user interactions with models more effective and efficient. Often overlooked, prompt comprehension and optimization play a decisive role in the success of any LLM application. Iris.ai's solution enhances user queries, transforming them into optimized prompts before they reach the LLM, yielding better results and elevating application performance.
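At its simplest, query-to-prompt optimization rewrites a terse or noisy user query into a structured prompt before it reaches the model. The filler-word list and template below are purely illustrative assumptions, not Iris.ai's method.

```python
# Hypothetical filler words to strip so the core question stands out.
FILLER = {"please", "can", "you", "tell", "me"}

def optimize_prompt(user_query, domain="scientific literature"):
    """Rewrite a raw user query into a structured prompt (illustrative template)."""
    core = " ".join(
        w for w in user_query.split() if w.lower().strip("?,") not in FILLER
    )
    return (
        f"You are a domain expert in {domain}.\n"
        f"Answer factually and cite supporting passages.\n"
        f"Question: {core}"
    )

prompt = optimize_prompt("Can you tell me what causes battery degradation?")
```

Real systems typically go further, rewriting the query with an LLM, injecting retrieved context, or expanding domain terminology, but the principle is the same: the model never sees the raw query.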
Other features include advanced data extraction tools that achieve human-level accuracy in processing text, tables and figures. Numerical values, units, and entities are automatically converted to the desired formats, reducing months of manual work to mere minutes.
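The unit-normalization step mentioned above can be sketched as a small extraction pass: find number-plus-unit mentions in text and convert each to a canonical unit. The regex and conversion table are a minimal illustrative subset, not Iris.ai's extraction pipeline.

```python
import re

# Conversion factors to metres; illustrative subset only.
UNIT_FACTORS = {"km": 1000.0, "m": 1.0, "cm": 0.01, "mm": 0.001}

# Longer unit symbols first so "km" is not matched as "m".
NUMBER_UNIT = re.compile(r"(\d+(?:\.\d+)?)\s*(km|cm|mm|m)\b")

def extract_lengths_in_metres(text):
    """Find '<number> <unit>' mentions and normalize each value to metres."""
    return [
        float(value) * UNIT_FACTORS[unit]
        for value, unit in NUMBER_UNIT.findall(text)
    ]

values = extract_lengths_in_metres("a 2.5 km track and a 30 cm ruler")
```

Production-grade extraction from tables and figures requires layout-aware models rather than regexes, but the normalization logic, mapping every extracted quantity onto one canonical unit, is the same.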
Anita Schjøll Abildgaard, CEO and Co-founder of Iris.ai, said, "It’s widely recognized that LLMs are prone to hallucinations and occasionally generate factually incorrect outputs. Yet, many organizations are not taking the necessary steps to address this, instead relying on end-users to determine whether outputs are accurate or which models perform “best”, whether on the level of the LLM, the RAG, or the prompts.
“We’ve seen numerous simplistic RAGaaS solutions released this year, most of which lack the advanced features and scalability organizations require. Our infrastructure services bridge this gap. By combining the demonstrably best LLMs for the client’s specific use case with a flexible and secure engineering toolset through the RSpace Infrastructure offering, we are delivering a scalable, adaptable solution that empowers organizations to retrieve, analyze and leverage information with unprecedented precision and bring their LLM usage to the next level.”
Designed for scalability, Iris.ai’s infrastructure offers seamless transitions from pilot projects to enterprise-wide deployments. Its robust security features ensure compliance with stringent privacy standards, making it a trusted solution for industries such as pharmaceuticals, engineering, and financial services. The system supports a wide range of content types, including PDFs, Word documents, Excel files, and presentations, while integrating effortlessly with internal and external datasets.
About Iris.ai
Iris.ai is a world-leading, award-winning AI engine for deep knowledge and textual understanding. Co-founded by serial entrepreneurs Anita Schjøll Abildgaard and Jacobo Elosua, alongside AI researcher Victor Botev, Iris.ai operates with a cross-European team at the forefront of AI for complex unstructured documentation. The company invests heavily in in-house research on NLP and applications of LLMs. Iris.ai has developed cutting-edge infrastructure and tools based on both LLMs and other AI/ML approaches to transform unstructured information into actionable deep knowledge. The RSpace Core application, configurable enterprise offerings, and Infrastructure API are built using a combination of in-house innovations and modified, fine-tuned open-source models. The system is designed to handle the deepest and most complex forms of knowledge and is scalable across industries. Iris.ai delivers its solutions to R&D teams, enterprises dealing with extensive internal research documentation, as well as AI builders looking to bring their LLM usage to the next level. Built for science - Engineered for Precision.
Source: Iris.ai