Build Information
Successful build of llmfarm_core with Swift 5.8 for macOS (SPM).
Build Command
env DEVELOPER_DIR=/Applications/Xcode-14.3.1.app xcrun swift build --arch arm64
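To depend on the same revision from another project, a minimal consumer manifest might look like the sketch below. The product name llmfarm_core and the macOS platform floor are assumptions inferred from the module name in the log; check the package's own Package.swift before relying on them.

// swift-tools-version:5.8
// Minimal consumer manifest pinned to the tag built in this report.
// NOTE: the product name "llmfarm_core" and the platform floor are
// assumptions based on the build log, not taken from the package manifest.
import PackageDescription

let package = Package(
    name: "LLMFarmClient",
    platforms: [.macOS(.v12)],  // placeholder; the package may require a newer macOS
    dependencies: [
        .package(url: "https://github.com/buhe/llmfarm_core.swift.git", exact: "0.9.0")
    ],
    targets: [
        .executableTarget(
            name: "LLMFarmClient",
            dependencies: [.product(name: "llmfarm_core", package: "llmfarm_core.swift")]
        )
    ]
)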
Build Log
========================================
RunAll
========================================
Builder version: 4.28.8
Interrupt handler set up.
========================================
Checkout
========================================
Clone URL: https://github.com/buhe/llmfarm_core.swift.git
Reference: 0.9.0
Initialized empty Git repository in /Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/.git/
From https://github.com/buhe/llmfarm_core.swift
* tag 0.9.0 -> FETCH_HEAD
HEAD is now at 927d670 "Rename ggml/mc to ggml/ggml-backend.m in Sources/llmfarm_core_cpp/ggml."
Cloned https://github.com/buhe/llmfarm_core.swift.git
Revision (git rev-parse @):
927d670751bc8aebbc5eb845afd36fe1eeef4f5a
SUCCESS checkout https://github.com/buhe/llmfarm_core.swift.git at 0.9.0
========================================
Build
========================================
Selected platform: macosSpm
Swift version: 5.8
Building package at path: $PWD/checkout
https://github.com/buhe/llmfarm_core.swift.git
Running build ...
env DEVELOPER_DIR=/Applications/Xcode-14.3.1.app xcrun swift build --arch arm64
Building for debugging...
[0/32] Copying metal
[1/32] Copying tokenizers
[2/32] Compiling llmfarm_core_cpp resource_bundle_accessor.m
[3/32] Compiling llmfarm_core_cpp package_helper.m
[4/32] Compiling starcoder.mm
[4/32] Compiling rwkv.mm
[6/32] Compiling replit.mm
[7/32] Compiling llmfarm_core_cpp grammar-parser.mm
[8/32] Compiling gptneox.mm
[9/32] Compiling llmfarm_core_cpp gpt_spm.mm
[10/32] Compiling llmfarm_core_cpp gpt_helpers.mm
[11/32] Compiling gpt2.mm
[12/32] Compiling llama_dadbed9.mm
[13/32] Compiling train.mm
[14/32] Compiling k_quants_dadbed9.m
[15/32] Compiling ggml_d925ed-metal.m
[16/32] Compiling ggml_d925ed-alloc.m
[17/32] Compiling ggml_dadbed9.m
[18/32] Compiling ggml_d925ed.m
[19/32] Compiling ggml-metal_dadbed9.m
[20/32] Compiling ggml.m
[21/32] Compiling ggml-quants.m
[22/32] Compiling ggml-metal.m
[23/32] Compiling ggml-alloc_dadbed9.m
[24/32] Compiling ggml-backend.m
[25/32] Compiling ggml-alloc.m
[26/32] Compiling export-lora.mm
[27/32] Compiling llmfarm_core_cpp exception_helper_objc.mm
[28/32] Compiling llmfarm_core_cpp exception_helper.mm
[29/32] Compiling finetune.mm
[30/32] Compiling llama.mm
[31/32] Compiling common.mm
[33/49] Compiling llmfarm_core GPTNeox.swift
<module-includes>:6:9: note: in file included from <module-includes>:6:
#import "/Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/Sources/llmfarm_core_cpp/spm-headers/gpt_spm.h"
^
/Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/Sources/llmfarm_core/LLMBase.swift:217:17: warning: variable 'class_name' was never mutated; consider changing to 'let' constant
var class_name = String(describing: self)
~~~ ^
let
/Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/Sources/llmfarm_core/LLaMA_FineTune.swift:39:21: warning: initialization of immutable value 'result' was never used; consider replacing with assignment to '_' or removing it
let result = run_finetune(Int32(args.count), &cargs,
~~~~^~~~~~
_
/Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/Sources/llmfarm_core/LLaMA_FineTune.swift:62:13: warning: variable 'args' was never mutated; consider changing to 'let' constant
var args = ["progr_name", "-m", self.model_base, "-o", self.export_model,
~~~ ^
let
/Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/Sources/llmfarm_core/LLaMA_FineTune.swift:71:21: warning: initialization of immutable value 'result' was never used; consider replacing with assignment to '_' or removing it
let result = export_lora_main(Int32(args.count), &cargs,
~~~~^~~~~~
_
/Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/Sources/llmfarm_core/LLaMa.swift:26:13: warning: variable 'progress_callback_user_data' was never used; consider replacing with '_' or removing it
var progress_callback_user_data:Int32 = 0
^~~~~~~~~~~~~~~~~~~~~~~~~~~
_
/Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/Sources/llmfarm_core/LLaMa.swift:89:12: warning: 'llama_eval' is deprecated: use llama_decode() instead
if llama_eval(self.context, mutable_inputBatch.mutPtr, Int32(inputBatch.count), min(self.contextParams.context, self.nPast)) != 0 {
^
/Users/admin/builds/vMd7uqzK/0/finestructure/swiftpackageindex-builder/spi-builder-workspace/Sources/llmfarm_core/LLaMa_dadbed9.swift:35:24: warning: 'llama_dadbed9_init_from_file' is deprecated: please use llama_dadbed9_load_model_from_file combined with llama_dadbed9_new_context_with_model instead
self.context = llama_dadbed9_init_from_file(path, params)
^
[34/49] Compiling llmfarm_core LLMBase.swift
[35/49] Compiling llmfarm_core LLaMA_FineTune.swift
[36/49] Compiling llmfarm_core LLaMa.swift
[37/49] Compiling llmfarm_core LLaMa_dadbed9.swift
[38/54] Compiling llmfarm_core Starcoder.swift
[39/54] Compiling llmfarm_core Tasker.swift
[40/54] Compiling llmfarm_core TokenizeUtils.swift
[41/54] Compiling llmfarm_core Tokenizer.swift
[42/54] Compiling llmfarm_core TokenizerConfig.swift
[43/54] Compiling llmfarm_core AI.swift
[44/54] Compiling llmfarm_core ArrayExt.swift
[45/54] Compiling llmfarm_core ByteEncoder.swift
[46/54] Compiling llmfarm_core Extensions.swift
[47/54] Compiling llmfarm_core FineTune.swift
[48/54] Compiling llmfarm_core GPT2.swift
[49/54] Emitting module llmfarm_core
[50/54] Compiling llmfarm_core ComputeGraph.swift
[51/54] Compiling llmfarm_core Math.swift
[52/54] Compiling llmfarm_core Utils.swift
[53/54] Compiling llmfarm_core RWKV.swift
[54/54] Compiling llmfarm_core Replit.swift
Build complete! (86.80s)
Build complete.
Done.
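All diagnostics in the log are warnings, so the build succeeds. They fall into two groups: never-mutated or never-used bindings, which the compiler's fix-its resolve locally, and deprecated llama.cpp entry points ('llama_eval' superseded by 'llama_decode()', 'llama_dadbed9_init_from_file' superseded by the split load-model/new-context calls), which require an upstream API migration rather than a fix-it. A standalone sketch of the first group follows; FineTuneJob and launch are hypothetical stand-ins for the package's run_finetune/export_lora_main calls, not code from the repository.

// Standalone Swift illustration of the fix-its suggested throughout the log.
struct FineTuneJob {
    let modelBase: String
    let exportModel: String

    // Hypothetical stand-in for run_finetune / export_lora_main.
    func launch(arguments: [String]) -> Int32 {
        print("launching with \(arguments)")
        return 0
    }

    func run() {
        // "variable 'args' was never mutated": declare it with 'let'.
        let args = ["progr_name", "-m", modelBase, "-o", exportModel]

        // "initialization of immutable value 'result' was never used":
        // discard the return value explicitly with '_'.
        _ = launch(arguments: args)
    }
}

FineTuneJob(modelBase: "model.gguf", exportModel: "lora-out.gguf").run()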