Oracle Introduces In-Database LLMs and Automated Vector Store with HeatWave GenAI

Oracle has announced the general availability of HeatWave GenAI, which includes the industry’s first in-database large language models (LLMs) and an automated in-database vector store. The offering lets customers build generative AI applications without AI expertise, data movement, or additional cost, according to oracle.com.

HeatWave GenAI is now available across all Oracle Cloud regions, Oracle Cloud Infrastructure (OCI) Dedicated Region, and other clouds at no extra cost to HeatWave customers. This new offering provides the ability to create a vector store for enterprise unstructured content with a single SQL command, using built-in embedding models. Users can perform natural language searches in one step using either in-database or external LLMs, enhancing performance and data security while reducing application complexity.
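As a rough illustration of that workflow, the sketch below loads unstructured documents from an object storage folder into an automatically created vector store and then asks a natural-language question in one step via retrieval-augmented generation. The routine names (sys.VECTOR_STORE_LOAD, sys.ML_RAG), the bucket URI, the schema and table names, and the option keys are assumptions modeled on HeatWave GenAI's documented interface, not verified signatures; consult the current HeatWave documentation before using them.

```sql
-- Hedged sketch: load documents (PDF, HTML, TXT, etc.) from Object Storage into
-- an automated vector store; embeddings are generated by built-in models.
-- URI, schema, and table names below are placeholders.
CALL sys.VECTOR_STORE_LOAD(
  'oci://my_bucket@my_namespace/contracts/',
  '{"table_name": "contracts_embeddings"}'
);

-- One-step natural-language search over the loaded documents
-- using retrieval-augmented generation (RAG).
SET @options = JSON_OBJECT('vector_store', JSON_ARRAY('demo_db.contracts_embeddings'));
CALL sys.ML_RAG('What are the termination clauses in our supplier contracts?', @output, @options);
SELECT JSON_PRETTY(@output);
```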

Innovative Features

HeatWave GenAI introduces several innovative features:

In-database LLMs: Simplify the development of generative AI applications at lower cost by letting customers search data, generate or summarize content, and perform retrieval-augmented generation (RAG) with HeatWave Vector Store (see the generation sketch after this list). Integration with the OCI Generative AI service also provides access to pre-trained models from leading LLM providers.
Automated In-database Vector Store: Facilitates the use of generative AI with business documents without moving data to a separate vector database. All steps to create the vector store and vector embeddings are automated, as the single-command load sketch above illustrates.
Scale-out Vector Processing: Delivers fast semantic search results without loss of accuracy, using a new native VECTOR data type and an optimized distance function. Semantic queries can be expressed in standard SQL and combined with other SQL operators for comprehensive data analysis (see the similarity search sketch after this list).
HeatWave Chat: A Visual Studio Code plug-in for MySQL Shell that provides a graphical interface for HeatWave GenAI and accepts questions in natural language or SQL. It maintains conversation context and lets users verify the source of generated answers; a chat sketch follows the list.
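To make the first item concrete, here is a minimal sketch of content summarization with an in-database LLM. The sys.ML_GENERATE call, its option keys, and the model name shown are assumptions modeled on HeatWave GenAI's published examples rather than verified signatures.

```sql
-- Hedged sketch: summarize text with an in-database LLM.
-- The model name and option keys are placeholders; check the HeatWave docs.
SELECT sys.ML_GENERATE(
  'Summarize the key points of our Q2 support tickets in three sentences.',
  JSON_OBJECT('task', 'generation', 'model_id', 'mistral-7b-instruct-v1')
) AS summary;
```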
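For the scale-out vector processing item, the sketch below mixes a semantic similarity ranking with an ordinary relational filter in one statement. The embeddings table, its VECTOR column, the sys.ML_EMBED_ROW embedding routine, the built-in model name, and the DISTANCE function with a COSINE metric are all assumptions based on how HeatWave's native vector support is described; exact names and signatures may differ.

```sql
-- Hedged sketch: semantic search combined with standard SQL operators.
-- Assumes an embeddings table with a native VECTOR column named `vec`;
-- table, column, and routine names are illustrative.
SELECT doc_id,
       segment,
       DISTANCE(vec,
                sys.ML_EMBED_ROW('renewable energy incentives',
                                 JSON_OBJECT('model_id', 'all_minilm_l12_v2')),
                'COSINE') AS score
FROM demo_db.contracts_embeddings
WHERE document_name LIKE '%2024%'   -- ordinary relational filter
ORDER BY score                      -- smaller distance = more similar
LIMIT 5;
```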
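The chat experience can also be driven from SQL. The sketch below assumes a sys.HEATWAVE_CHAT routine of the kind that backs HeatWave Chat, which keeps conversation context across calls in the same session; treat the routine name and behavior as assumptions to verify against the documentation.

```sql
-- Hedged sketch: conversational questions over the vector store,
-- with context preserved across calls in the same session.
CALL sys.HEATWAVE_CHAT('Which suppliers have contracts expiring this year?');
CALL sys.HEATWAVE_CHAT('And which of those include an auto-renewal clause?');
```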


Performance Benchmarks

HeatWave GenAI has demonstrated significant performance advantages. Creating a vector store for documents in various formats is up to 23 times faster with HeatWave GenAI, at a quarter of the cost, compared with Knowledge Bases for Amazon Bedrock. Benchmarks show HeatWave GenAI is 30 times faster than Snowflake and costs 25% less, 15 times faster than Databricks and costs 85% less, and 18 times faster than Google BigQuery and costs 60% less. Additionally, HeatWave similarity search processing is up to 80 times faster than Amazon Aurora PostgreSQL with pgvector, delivering accurate results with predictable response times.

Customer and Analyst Insights

Vijay Sundhar, CEO of SmarterD, highlighted the ease of leveraging generative AI with HeatWave GenAI, noting the significant reduction in application complexity and costs. Safarath Shafi, CEO of EatEasy, emphasized the differentiation provided by HeatWave’s support for in-database LLMs and vector stores, enabling new capabilities and improved performance for their customers. Analysts, including Holger Mueller of Constellation Research, praised HeatWave’s integrated approach to generative AI, noting its cost-effectiveness and performance advantages.

HeatWave Overview

HeatWave is the only cloud service offering automated and integrated generative AI and machine learning for transactions and lakehouse-scale analytics. It is available on OCI, Amazon Web Services, Microsoft Azure via Oracle Interconnect for Azure, and in customer data centers with OCI Dedicated Region and Oracle Alloy.

Image source: Shutterstock


