
Core Concepts

Flow

Elemental Flow

An Elemental Flow is a self-contained, autonomous unit designed to perform a specific, well-defined task.

Components of an Elemental Flow

| Component | Description |
| --- | --- |
| version | Defines the current iteration of the flow, allowing for systematic versioning and change tracking. |
| privacy | Determines the flow's accessibility. Public flows are available in the marketplace for general use, while private flows remain restricted. |
| metadata | Encompasses essential flow information, including its title, description, and categorizing tags. |
| prompt | A set of instructions and guidelines that shape the Large Language Model's behavior and outputs within the flow. |
| inputs | User-defined variables that serve as parameters for the flow. An Elemental Flow can have any number of inputs. |
| model | The Large Language Model (LLM) utilized to execute the flow's core functionality. |
| embeddings | A dataset of vector representations used to enhance the flow with Retrieval-Augmented Generation (RAG) capabilities. |
| readme | Provides a comprehensive overview of the flow, detailing its purpose, functionality, and potential applications to guide users in effective utilization. |

Elemental Flow Example:

```yaml
# Version format, e.g. "0.0.1"
version: "your.version.here"

# Basic metadata for the flow
metadata:
  name: "your-flow-name"
  description: "A brief description of your flow"
  author: "your-username" # This username should match your account username
  tags: [tag1, tag2, tag3, ...] # Tags are keywords used to categorize your flow
  private: false # Access control for your flows (true/false)

# Define the input variables required
inputs:
  input1:
    type: string # Currently, only the string type is supported
    description: "Description of input1"
    required: true
    example: "Example value for input1"
  input2:
    type: string
    description: "Description of input2"
    required: true
    example: "Example value for input2"

# LLM configuration
model:
  provider: "provider-name" # e.g., anthropic, openai, meta, etc.
  name: "model-name"

# Dataset configuration (Optional)
dataset:
  source: "author_name/dataset_name" # Make sure this data set exists

# Prompt template configuration
prompt: |
  Your flow's primary instruction or role...
  You can use {input1} and {input2} placeholders to reference inputs.

# README configuration
readme: |
  Your flow's readme...
  You can use raw text or markdown here.
```
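Once you have saved this definition (for example as flow.yaml), you can try it out from the Python SDK. The snippet below is a minimal sketch: it assumes the mira_sdk Flow loader and the client.flow.test / client.flow.deploy helpers, so check your SDK reference if your version exposes different names.

```python
from mira_sdk import MiraClient, Flow

client = MiraClient(config={"API_KEY": "YOUR_API_KEY"})

# Load the flow definition written above (path is illustrative)
flow = Flow(source="flow.yaml")

# Dry-run the flow with sample inputs before publishing it
input_dict = {
    "input1": "Example value for input1",
    "input2": "Example value for input2",
}
print(client.flow.test(flow, input_dict))

# Publish the flow so it becomes available under your account
client.flow.deploy(flow)
```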

Compound Flows (Coming Soon)

Curious about combining Elemental Flows for more complex operations? Stay tuned for Compound Flows – where the magic of flow synergy comes to life!

Datasets

Datasets are vector representations of data stored in a specialized vector database. They play a critical role in empowering Elemental Flows with Retrieval-Augmented Generation (RAG) capabilities.

When a user interacts with a flow, the system fetches pertinent context from the datasets in real time. By providing relevant context to the LLM, datasets significantly improve the accuracy and relevance of the generated responses.

Supported File Formats

Mira Flows accepts the following file formats for creating datasets:

| File Type | Processing Method |
| --- | --- |
| PDF (.pdf) | Textual content is extracted from the PDF document |
| Markdown (.md) | Textual content is extracted from the Markdown document |
| URL | The specified webpage's content is scraped and extracted for textual information |
| CSV (.csv) | All URLs contained within the CSV are identified, extracted, and their respective web content is scraped |
| Text (.txt) | Textual content is directly extracted from the plain text file |

Create and configure your dataset:

```python
from mira_sdk import MiraClient

client = MiraClient(config={"API_KEY": "YOUR_API_KEY"})

# Create dataset
client.dataset.create("author/dataset_name", "Optional description")

# Add a URL to your dataset (the dataset must already exist)
client.dataset.add_source("author/dataset_name", url="https://example.com")

# Add a file to your dataset (the dataset must already exist)
client.dataset.add_source("author/dataset_name", file_path="path/to/my/file.csv")
```

Link a Dataset with your Flow

Once you have created a dataset, you can associate it with your flow by adding the following configuration in your flow.yaml file.

```yaml
# Dataset configuration
dataset:
  source: "author/dataset_name"
```

LLMs

Large Language Models are advanced AI models trained on vast amounts of text data, capable of understanding and generating human-like text across a wide range of topics and tasks.

In Elemental Flows, LLMs are the primary computational engine, processing inputs and generating outputs based on the given prompts and context. Different LLMs can be chosen for various flows based on their specific requirements, such as language support, domain expertise, or computational efficiency.

List of available LLMs

| Provider | Model Name |
| --- | --- |
| meta | llama-3.1-8b-instruct |
| meta | llama-3.1-8b-instruct:free |
| meta | llama-3.1-70b-instruct:free |
| meta | llama-3.1-405b-instruct:free |
| meta | llama-3.1-405b-instruct |
| meta | llama-3.2-3b-instruct:free |
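For example, to run a flow on Meta's Llama 3.1 8B Instruct model from the table above, the model block of your flow.yaml would look like this:

```yaml
# LLM configuration
model:
  provider: "meta"
  name: "llama-3.1-8b-instruct"
```

Models tagged :free can be selected the same way by changing only the name field.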