
How to Fix Modelfile YAML Errors During Ollama Custom Model Creation

When you build local AI models using Ollama, defining custom behavior requires creating a Modelfile. However, developers frequently encounter a hard parsing failure during the build step: "command must be one of from, license, template...".

This specific error halts your pipeline and prevents the model from compiling. It occurs due to improper multiline string formatting or incorrect YAML indentation when embedding Modelfiles into infrastructure-as-code (IaC) or configuration files.

Here is the technical breakdown of why the Ollama lexer fails, along with the precise fixes required to resolve the syntax errors.

The Root Cause of the Syntax Error

Ollama parses the Modelfile using a strict line-by-line evaluator. The parser expects every new logical line to begin with a reserved instruction keyword (e.g., FROM, SYSTEM, PARAMETER, TEMPLATE).

During custom LLM agent creation, developers inject complex system prompts and few-shot examples into the Modelfile. If a multiline prompt is not properly enclosed in Ollama's specific delimiter ("""), the parser never enters string-evaluation mode. It then reads the next word of your English prompt (e.g., "You", "Always", or "Respond") and attempts to execute it as a top-level command.

Because "You" is not a valid instruction, the engine throws the generic Ollama Modelfile syntax error. This issue is compounded when the Modelfile is generated or housed inside a YAML file (such as a Kubernetes ConfigMap, Docker Compose file, or LangChain config), where YAML's own multiline string folding rules conflict with Ollama's parser.

The Fix: Correcting Modelfile Multiline Syntax

To resolve the error, you must explicitly define the boundaries of your multiline strings using triple double-quotes (""").

The Broken Implementation

The following syntax will trigger the command must be one of... error because the SYSTEM instruction lacks string delimiters. The parser reads the You that opens the prompt on the line after SYSTEM as a new command.

FROM llama3
PARAMETER temperature 0.2
SYSTEM 
You are a senior database administrator.
Always respond with raw SQL code.

The Correct Implementation

Wrap the prompt in """. Ensure the opening """ is on the same line as the SYSTEM command, and the closing """ sits on its own line.

FROM llama3
PARAMETER temperature 0.2
SYSTEM """
You are a senior database administrator.
Always respond with raw SQL code.
"""

Handling Ollama YAML Formatting in Deployment Configs

When automating infrastructure, you often embed the Modelfile into a YAML configuration. Incorrect YAML block scalars will strip formatting, inject unwanted spaces, or break the underlying """ delimiters, causing the same error.

To preserve the literal formatting required by Ollama, use the YAML literal block scalar (|). Do not use the folded scalar (>), as it replaces newlines with spaces and destroys the structural integrity of the Modelfile.
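The difference between the two scalar styles is easy to demonstrate. The sketch below (assuming the third-party PyYAML package is installed) parses the same embedded snippet with both | and > and prints the resulting strings:

```python
# Literal (|) vs folded (>) block scalars: the literal form keeps the
# newlines Ollama's parser depends on; the folded form joins lines with
# spaces, destroying the Modelfile structure.
import yaml

doc = '''
literal: |
  SYSTEM """
  You are a helpful agent.
  """
folded: >
  SYSTEM """
  You are a helpful agent.
  """
'''

data = yaml.safe_load(doc)
print(repr(data["literal"]))  # 'SYSTEM """\nYou are a helpful agent.\n"""\n'
print(repr(data["folded"]))   # 'SYSTEM """ You are a helpful agent. """\n'
```

The folded value arrives at the Ollama parser as a single line, so the closing """ is no longer a standalone token.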

Example: Valid Kubernetes ConfigMap

Here is a production-ready example of how to correctly nest an Ollama Modelfile inside a YAML manifest without triggering syntax collisions.

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ollama-model
  namespace: ai-workloads
data:
  Modelfile: |
    FROM mistral:instruct
    PARAMETER num_ctx 4096
    PARAMETER stop "<|im_end|>"
    SYSTEM """
    You are an autonomous debugging agent.
    Analyze the provided stack trace and output a JSON patch.
    Do not include conversational filler.
    """

In this configuration, the | symbol instructs the YAML parser to preserve newlines and relative indentation (the common leading indent of the block is stripped when the value is extracted). The """ tokens are passed through unchanged to the Ollama build engine.
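You can verify this behavior locally before deploying. The sketch below (again assuming PyYAML is installed) loads a trimmed-down version of the manifest and confirms that both """ delimiters survive extraction as their own lines:

```python
# Round-trip check: extract data.Modelfile from the manifest and confirm
# the delimiter lines are intact after the YAML parser strips the common
# leading indentation.
import yaml

manifest = '''
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ollama-model
data:
  Modelfile: |
    FROM mistral:instruct
    SYSTEM """
    You are an autonomous debugging agent.
    """
'''

modelfile = yaml.safe_load(manifest)["data"]["Modelfile"]
lines = modelfile.splitlines()
print(lines[1])   # SYSTEM """
print(lines[-1])  # """
```

If either printed line carries extra indentation or trailing characters, the block scalar indentation in the manifest is off.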

Deep Dive: How the Ollama Parser Evaluates Strings

Understanding the lexer clarifies why these formatting rules matter. The Ollama parser tokenizes the Modelfile by splitting on whitespace, unless it is actively inside a string literal block.

When the parser reads SYSTEM """, a boolean flag inside the lexer (inString) is set to true. While this flag is active, all subsequent characters—including newlines, quotes, and reserved keywords like FROM—are treated as literal text payload. The parser only resets the flag to false when it encounters a standalone """ token.

If YAML indentation adds trailing whitespace after your closing """, the check for the termination token may fail to match. The parser then remains stuck in string mode or mis-parses the termination block, bleeding into subsequent instructions and throwing the parser out of sync.
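The state machine described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the assumed behavior, not Ollama's actual Go source; the reserved-keyword set follows the instructions listed in the error message:

```python
# Simplified model of the lexer's inString flag: outside string mode the
# first token of each line must be a reserved instruction; inside string
# mode everything is literal payload until a standalone """ appears.
RESERVED = {"FROM", "LICENSE", "TEMPLATE", "SYSTEM",
            "ADAPTER", "PARAMETER", "MESSAGE"}

def invalid_commands(modelfile: str) -> list:
    in_string = False
    bad = []
    for raw in modelfile.splitlines():
        line = raw.strip()
        if in_string:
            if line == '"""':      # standalone delimiter ends string mode
                in_string = False
            continue               # everything else is literal payload
        if not line:
            continue
        keyword = line.split()[0].upper()
        if keyword not in RESERVED:
            bad.append(keyword)    # the word the error complains about
        elif line.endswith('"""') and line.count('"""') == 1:
            in_string = True       # opening delimiter: enter string mode
    return bad

quoted = 'FROM llama3\nSYSTEM """\nYou are helpful.\n"""'
unquoted = 'FROM llama3\nSYSTEM\nYou are helpful.'
print(invalid_commands(quoted))    # []
print(invalid_commands(unquoted))  # ['YOU']
```

With the delimiters present, the prompt text is consumed as payload; without them, the first word of the prompt surfaces as an unknown command.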

Common Pitfalls and Edge Cases

1. Escaping Quotes Inside the Prompt

If your system prompt requires the model to output triple quotes, you cannot natively escape """ inside the SYSTEM block. You must alter your prompt design to use single quotes, standard double quotes, or markdown code blocks instead.

Avoid:

SYSTEM """
Output exactly """this string""".
"""

Refactor to:

SYSTEM """
Output exactly "this string".
Use standard double quotes in your response.
"""

2. Trailing Spaces on the Delimiter Line

A common, invisible source of the Ollama Modelfile syntax error is trailing whitespace. If your closing delimiter looks like """ (with a trailing space), older versions of the Ollama CLI will fail to recognize it as the termination sequence. Ensure the closing """ is the only content on its line.
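A small linting pass catches this before the build does. The helper below is a hypothetical sketch, not part of the Ollama CLI; it flags any delimiter line that carries invisible trailing whitespace:

```python
# Flag closing delimiters that are followed by trailing whitespace,
# returning their 1-based line numbers.
def find_dirty_delimiters(modelfile: str) -> list:
    return [
        i
        for i, line in enumerate(modelfile.splitlines(), start=1)
        if line.rstrip() == '"""' and line != line.rstrip()
    ]

text = 'SYSTEM """\nprompt text\n""" \n'  # trailing space on line 3
print(find_dirty_delimiters(text))  # [3]
```

Running a check like this in CI prevents the invisible-whitespace variant of the error from reaching the build step.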

3. Intersecting YAML and Go Template Syntax

If you are passing the Modelfile through Helm or another Go-based templating engine, the engine evaluates {{ }} expressions itself, which collides with the {{ }} placeholders used inside Ollama's TEMPLATE instruction.

To prevent Helm from attempting to evaluate your Ollama template variables, wrap the Ollama TEMPLATE block in Helm literal strings:

data:
  Modelfile: |
    FROM llama3
    TEMPLATE """{{`{{ if .System }}<|start_header_id|>system<|end_header_id|>
    {{ .System }}<|eot_id|>{{ end }}`}}"""

Conclusion

The command must be one of from, license, template... error is strictly a lexical parsing failure. By enforcing strict triple-quote (""") boundaries around multiline instructions and properly utilizing the literal block scalar (|) in YAML deployments, you ensure the Ollama engine successfully compiles your custom models. Maintain rigorous formatting in your Modelfiles to build stable, predictable local AI agents.