Age | Commit message | Author | Files | Lines
---|---|---|---|---
5 days | Merge pull request #323056 from SomeoneSerge/fix/cudaPackages/outputSpecified (cudaPackages: make getOutput work again) | Someone | 1 | -7/+3
7 days | treewide: cuda: use propagatedBuildInputs, lib.getOutput | Someone Serge | 1 | -7/+3
9 days | llama-cpp: 3091 -> 3260 | Lan Tian | 1 | -20/+19
2024-06-23 | treewide: use cmakeCudaArchitecturesString | Jeremy Schlatter | 1 | -6/+1
2024-06-09 | llama-cpp: 3089 -> 3091 | R. Ryantm | 1 | -2/+2
2024-06-06 | llama-cpp: 3070 -> 3089 (diff: https://github.com/ggerganov/llama.cpp/compare/b3070..b3089; changelog: https://github.com/ggerganov/llama.cpp/releases/tag/b3089) | Jono Chang | 1 | -2/+2
2024-06-03 | llama-cpp: 3015 -> 3070 | R. Ryantm | 1 | -2/+2
2024-06-02 | Merge pull request #315258 from r-ryantm/auto-update/llama-cpp (llama-cpp: 2953 -> 3015) | OTABI Tomoya | 1 | -2/+2
2024-06-01 | Merge pull request #313525 from maxstrid/llama-cpp-rpc (llama-cpp: Add rpc and remove mpi support) | Peder Bergebakken Sundt | 1 | -6/+9
2024-05-28 | llama-cpp: 2953 -> 3015 | R. Ryantm | 1 | -2/+2
2024-05-24 | llm-ls: 0.5.2 -> 0.5.3 | R. Ryantm | 1 | -3/+3
2024-05-21 | llama-cpp: Add rpc and remove mpi support (llama-cpp no longer supports MPI; RPC is the recommended alternative, see https://github.com/ggerganov/llama.cpp/pull/7395) | Maxwell Henderson | 1 | -6/+9
2024-05-21 | llama-cpp: 2901 -> 2953 | R. Ryantm | 1 | -2/+2
2024-05-16 | llama-cpp: 2843 -> 2901 | R. Ryantm | 1 | -2/+2
2024-05-11 | llama-cpp: 2781 -> 2843 | R. Ryantm | 1 | -2/+2
2024-05-03 | llama-cpp: 2746 -> 2781 | R. Ryantm | 1 | -2/+2
2024-04-30 | llama-cpp: set build_number/build_commit for version info | Enno Richter | 1 | -2/+11
2024-04-26 | llama-cpp: 2700 -> 2746 | R. Ryantm | 1 | -2/+2
2024-04-25 | llm-ls: 0.4.0 -> 0.5.2 | Roman Zakirzyanov | 1 | -3/+9
2024-04-21 | llama-cpp: 2674 -> 2700 | R. Ryantm | 1 | -2/+2
2024-04-14 | llama-cpp: 2636 -> 2674 | R. Ryantm | 1 | -2/+2
2024-04-09 | llama-cpp: 2589 -> 2636 | R. Ryantm | 1 | -2/+2
2024-04-04 | llama-cpp: 2568 -> 2589 | R. Ryantm | 1 | -2/+2
2024-03-31 | llama-cpp: use pkgs.autoAddDriverRunpath | Jonathan Ringer | 1 | -1/+2
2024-03-28 | llama-cpp: update from b2481 to b2568 | Joseph Stahl | 1 | -2/+2
2024-03-26 | llama-cpp: embed (don't pre-compile) metal shaders (port of https://github.com/ggerganov/llama.cpp/pull/6118, though shader compilation with Xcode is disabled, since it requires disabling the sandbox and only works on macOS anyway) | Joseph Stahl | 1 | -1/+4
2024-03-26 | llama-cpp: rename cuBLAS to CUDA (matches upstream change https://github.com/ggerganov/llama.cpp/commit/280345968dabc00d212d43e31145f5c9961a7604) | Joseph Stahl | 1 | -1/+1
2024-03-25 | llama-cpp: fix blasSupport (#298567); switch from openblas to blas | Christian Kögler | 1 | -3/+5
2024-03-21 | llama-cpp: 2454 -> 2481 | R. Ryantm | 1 | -2/+2
2024-03-19 | Merge pull request #281576 from yannham/refactor/cuda-setup-hooks-refactor (cudaPackages: generalize and refactor setup hooks) | Someone | 1 | -4/+1
2024-03-18 | llama-cpp: 2424 -> 2454 | R. Ryantm | 1 | -2/+2
2024-03-15 | cudaPackages: generalize and refactor setup hook (refactors the CUDA setup hooks, in particular autoAddOpenGLRunpath and autoAddCudaCompatRunpathHook, which shared most of their bash code: the latter was originally copy-pasted from the former, so every change had to be duplicated across both hooks. The common part is now abstracted into a single shell script that applies a generic patch action to every ELF file in the output; for autoAddOpenGLRunpath the action is simply addOpenGLRunpath, now addDriverRunpath, and for autoAddCudaCompatRunpathHook it is a few-line function. This also switches to the newer addDriverRunpath in place of addOpenGLRunpath and renames the CUDA hook to match. Co-authored-by: Connor Baker <connor.baker@tweag.io>. See the sketch after this table.) | Yann Hamdaoui | 1 | -4/+1
2024-03-14 | llama-cpp: 2382 -> 2424 | R. Ryantm | 1 | -2/+2
2024-03-10 | llama-cpp: 2346 -> 2382 | R. Ryantm | 1 | -2/+2
2024-03-05 | llama-cpp: 2294 -> 2346 | R. Ryantm | 1 | -2/+2
2024-02-28 | llama-cpp: 2249 -> 2294; bring upstream flake | happysalada | 1 | -87/+96
2024-02-23 | llama-cpp: 2212 -> 2249 | R. Ryantm | 1 | -2/+2
2024-02-20 | llama-cpp: 2167 -> 2212 | R. Ryantm | 1 | -2/+2
2024-02-16 | llama-cpp: 2135 -> 2167 | R. Ryantm | 1 | -2/+2
2024-02-13 | llama-cpp: 2105 -> 2135 | R. Ryantm | 1 | -2/+2
2024-02-09 | llama-cpp: 2074 -> 2105 | R. Ryantm | 1 | -2/+2
2024-02-06 | llama-cpp: 2050 -> 2074 | R. Ryantm | 1 | -2/+2
2024-02-02 | llama-cpp: 1892 -> 2050 | R. Ryantm | 1 | -2/+2
2024-01-19 | llama-cpp: 1848 -> 1892; add static build mode | happysalada | 1 | -2/+15
2024-01-12 | llama-cpp: 1742 -> 1848 | Alex Martens | 1 | -14/+2
2024-01-13 | Merge pull request #278120 from r-ryantm/auto-update/llama-cpp (llama-cpp: 1710 -> 1742) | Weijia Wang | 1 | -2/+2
2024-01-01 | llama-cpp: 1710 -> 1742 | R. Ryantm | 1 | -2/+2
2023-12-31 | llama-cpp: fix cuda support; integrate upstream | happysalada | 1 | -28/+33
2023-12-29 | Merge pull request #277451 from accelbread/llama-cpp-update (llama-cpp: 1671 -> 1710) | Nick Cao | 1 | -2/+2
2023-12-28 | llama-cpp: 1671 -> 1710 | Archit Gupta | 1 | -2/+2
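
The 2024-03-15 hook refactor above describes the shared mechanism: one shell script that walks every ELF file in an output and applies a caller-supplied patch action (addDriverRunpath for one hook, a small custom function for the other). A minimal bash sketch of that pattern; the function name `autoFixElfFiles` and its exact interface are illustrative assumptions here, not necessarily the actual nixpkgs code:

```bash
# Sketch of the generic "apply a patch action to every ELF file" helper
# described in the 2024-03-15 commit. Names/interface are assumptions.
autoFixElfFiles() {
    local patchAction="$1" outputDir="$2"
    local f
    while IFS= read -r -d '' f; do
        # ELF files start with the magic bytes 0x7f 'E' 'L' 'F'.
        if [[ "$(head -c 4 "$f" 2>/dev/null)" == $'\x7fELF' ]]; then
            # Apply the caller-supplied action, e.g. addDriverRunpath,
            # or a function appending the cuda_compat runpath.
            "$patchAction" "$f"
        fi
    done < <(find "$outputDir" -type f -print0)
}
```

With this shape, a hook like autoAddDriverRunpath reduces to roughly `autoFixElfFiles addDriverRunpath "$out"` run during fixup, and the CUDA-compat hook only has to supply a different action function; the exact invocation in nixpkgs may differ.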