Frequently Asked Questions (FAQ)
This section addresses common questions related to FlockMTL, including its features, usage, and best practices. If your question is not answered here, feel free to contact us or refer to other sections in the documentation.
General Questions
What is FlockMTL?
FlockMTL is a DuckDB extension that brings semantic analysis capabilities directly into your SQL queries. It integrates language models and retrieval-augmented generation through a set of map and reduce functions.
Who can benefit from using FlockMTL?
FlockMTL is designed for developers, data scientists, and businesses that need to leverage semantic analysis alongside traditional SQL operations. It’s especially valuable for use cases like document ranking, content generation, and semantic search.
Features
What are the key features of FlockMTL?
FlockMTL offers:
- Model Management: Configure system-defined and user-defined models.
- Prompt Management: Create and manage reusable text prompts.
- Secret Management: Store and manage API keys securely for supported providers.
- Integration with DuckDB: Leverage DuckDB’s powerful SQL engine alongside semantic capabilities.
- Support for Multiple Providers: Access OpenAI, Azure, and Ollama models for various tasks.
- Local Inference with Ollama: Perform inference on-premises with LLaMA models.
- Scalar / Map Functions: Use functions like `llm_complete`, `llm_complete_json`, and `llm_filter` for advanced tasks.
- Aggregate / Reduce Functions: Perform summarization, ranking, and reordering with functions like `llm_reduce` and `llm_rerank`.
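As an illustrative sketch of how a map function appears inside an ordinary query (the table name, model name, prompt name, and the exact argument shapes here are assumptions for illustration; see the Scalar Functions documentation for the authoritative signatures):

```sql
-- Hypothetical sketch: label each support ticket with llm_complete.
-- 'my-gpt' and 'classify-ticket' are assumed to have been registered
-- beforehand via Model Management and Prompt Management.
SELECT
    ticket_id,
    llm_complete(
        {'model_name': 'my-gpt'},
        {'prompt_name': 'classify-ticket'},
        {'ticket_text': body}
    ) AS category
FROM support_tickets;
```

Because `llm_complete` is a scalar function, it runs once per row, so the model is invoked for every ticket in `support_tickets`.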
Which providers are supported in FlockMTL?
FlockMTL currently supports:
- OpenAI
- Azure
- Ollama
Installation and Setup
How do I install FlockMTL?
You can install FlockMTL simply by running the following commands:

```sql
INSTALL flockmtl FROM community;
LOAD flockmtl;
```
Are there any prerequisites for using FlockMTL?
Yes, you need:
- DuckDB (latest version recommended).
- API keys for the providers you plan to use (e.g., OpenAI, Azure).
- For local inference with Ollama, ensure the appropriate hardware and dependencies are installed.
Using FlockMTL
How do I manage models in FlockMTL?
You can manage models using SQL commands like:
- `CREATE MODEL`
- `UPDATE MODEL`
- `DELETE MODEL`
Refer to the Model Management section for detailed examples.
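A minimal sketch of the model lifecycle, assuming a function-style argument list (the model alias `my-gpt`, the underlying model name, and the exact syntax are assumptions; the Model Management section has the authoritative forms):

```sql
-- Hypothetical sketch: register a user-defined model, then point
-- the same alias at a different underlying model.
CREATE MODEL('my-gpt', 'gpt-4o-mini', 'openai');
UPDATE MODEL('my-gpt', 'gpt-4o', 'openai');
```

Keeping an alias like `my-gpt` in your queries means you can swap the underlying model without rewriting the queries themselves.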
How do I set up secrets for providers?
Check the Secrets Management section for a step-by-step guide.
Can I create custom prompts for my tasks?
Yes, you can create and manage custom prompts using the `CREATE PROMPT` and `UPDATE PROMPT` commands. See the Prompt Management section for more details.
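A minimal sketch of defining and revising a reusable prompt (the prompt name, the prompt text, and the function-style syntax are assumptions for illustration; consult the Prompt Management section for the exact forms):

```sql
-- Hypothetical sketch: define a reusable prompt, then tighten its wording.
CREATE PROMPT('classify-ticket', 'Classify this support ticket as billing, bug, or other.');
UPDATE PROMPT('classify-ticket', 'Classify this support ticket using exactly one label: billing, bug, or other.');
```

Once registered, the prompt can be referenced by name from the map and reduce functions instead of repeating the text in every query.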
Troubleshooting
What should I do if a query fails?
- Verify that the model and provider are correctly configured.
- Ensure the secret keys for providers are valid and up-to-date.
- Check the input data for any inconsistencies.
How do I debug API-related errors?
- Check your API usage limits and ensure the key is active.
- Use the `GET SECRET` command to confirm the correct key is being used.
- Look at error messages from the provider (e.g., OpenAI or Azure).
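For example, a quick check of which key is configured for a provider might look like this (the provider name and the exact `GET SECRET` syntax are assumptions; see the Secrets Management section for the authoritative form):

```sql
-- Hypothetical sketch: inspect the stored secret for the OpenAI provider
-- before re-running a failing query.
GET SECRET 'openai';
```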
Why is the response time slow for some queries?
Response times may vary based on:
- The provider (e.g., API latency for OpenAI or Azure).
- The complexity of the query.
- Hardware limitations (for local inference with Ollama).
- The batch size used during inference.
- The model's context window:
- Larger context windows may increase processing time.
- Smaller context windows with large datasets can lead to multiple API calls, increasing latency.
Additional Help
Where can I find more resources on FlockMTL?
How can I contact support?
For technical support, email us.
Got more questions? Let us know, and we’ll be happy to assist!