Fine-Tuning
In addition to the default model offered with all CodeMaker AI subscription plans, users can fine-tune dedicated models on their own code base.
Fine-tuning is performed against the entire code base, which may require processing several million tokens; for that reason, model fine-tuning is priced independently. Fine-tuning is charged for the tokens used during model training. There are no additional charges for hosting or using the fine-tuned model. The cost of fine-tuning is invoiced to your account.
Unit | Price |
---|---|
1 mln tokens | $15 |
Fine-tuning is an automated process, designed to be as accessible to our users as possible. It only requires setting up a Repository in the CodeMaker AI Portal and creating a new Fine-Tuned model.
Depending on the size of the code base, fine-tuning can be a slow process that takes multiple hours to complete.
The number of epochs parameter determines how many times each instance of code is used for training. A larger number of epochs causes the model to fit the data more closely, but too large a value will cause it to overfit. Based on our own research, an epoch value between 3 and 5 is likely to produce the best results. Increasing the epoch parameter lengthens the training duration and increases token usage.
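Because fine-tuning is billed per token and each epoch reprocesses the code base, a rough cost estimate can be computed up front. The sketch below assumes billed tokens scale linearly with the epoch count, which is consistent with the note above that more epochs means larger token usage; the function name is illustrative, not part of any CodeMaker AI API.

```python
def estimate_finetuning_cost(codebase_tokens: int, epochs: int,
                             price_per_million: float = 15.0) -> float:
    """Rough fine-tuning cost estimate.

    Assumption: each epoch processes the whole code base once, so
    billed tokens = codebase tokens x epochs. Price defaults to the
    $15 per 1 mln tokens rate from the pricing table.
    """
    billed_tokens = codebase_tokens * epochs
    return billed_tokens / 1_000_000 * price_per_million

# Example: a 2-million-token code base trained for 4 epochs
# bills roughly 8 mln tokens.
cost = estimate_finetuning_cost(2_000_000, 4)  # -> 120.0 (USD)
```

This also shows why the recommended 3 to 5 epoch range matters: doubling the epochs doubles the estimated bill under this linear assumption.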
The optional path parameter can limit the creation of training data sets to specific subdirectories. When omitted, all files in the repository are used for fine-tuning. The path accepts glob patterns such as `src/**` or `lib/**`; the pattern is resolved relative to the repository root directory.
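To illustrate how such a path filter behaves, here is a minimal sketch in Python. It approximates the portal's pattern matching with the standard-library `fnmatch` module; the actual matcher used by CodeMaker AI may differ, and the file names are made up for the example.

```python
import fnmatch

def matches(path: str, pattern: str) -> bool:
    # fnmatch's "*" crosses "/" separators, so "src/**" behaves here
    # like "every file under src/". This is an approximation of glob
    # semantics, sufficient for the illustration.
    return fnmatch.fnmatch(path, pattern)

# Hypothetical repository-relative file paths.
files = [
    "src/main/App.java",
    "lib/util/Strings.java",
    "docs/README.md",
]

# Only files under src/ are selected for the training data set.
selected = [f for f in files if matches(f, "src/**")]
```

With the pattern `src/**`, only `src/main/App.java` would be included; omitting the path parameter would correspond to selecting all three files.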
Category | Limit |
---|---|
Models | 1 |
Maximum Repository Size | 1 GB |
Maximum Source Code Size | 100 MB |