sudo nmcli connection up "10G.T"
sudo nmcli connection up "10G.U"
Verification:
$ ip link show enp2s0f0
3: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 18:9b:a5:80:5a:05 brd ff:ff:ff:ff:ff:ff
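An MTU of 9000 on the interface does not guarantee that jumbo frames survive the whole path, so it is worth probing end to end. A minimal sketch, assuming a hypothetical peer at 10.0.0.2 on the 10G segment: ping with the don't-fragment flag set and the largest ICMP payload that fits in a 9000-byte MTU.

```shell
# Largest ICMP payload for MTU 9000: 9000 minus 20-byte IP header and 8-byte ICMP header
payload=$((9000 - 20 - 8))
echo "payload size: ${payload}"   # prints 8972

# 10.0.0.2 is a placeholder peer address; -M do forbids fragmentation
echo "ping -M do -c 3 -s ${payload} 10.0.0.2"
```

If the pings come back without "message too long" errors, jumbo frames work across the whole path.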
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120                Driver Version: 550.120        CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla P40                      Off |   00000000:01:00.0 Off |                  Off |
| N/A   37C    P8             11W /  250W |       0MiB /  24576MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
Disabling ECC
Tesla-series GPUs ship with ECC (error-correcting code) enabled by default. ECC improves data integrity, but it also reduces the amount of usable VRAM and costs some performance.
ECC memory support: the Tesla P4, for example, supports ECC, and enabling it reserves part of the VRAM.
With ECC enabled, 7611 MB of VRAM is usable; with it disabled, 8121 MB.
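ECC can be toggled per GPU from the command line; a sketch using nvidia-smi (the mode change only takes effect after a reboot):

```shell
# Disable ECC on GPU index 0 (use -e 1 to turn it back on); requires root
sudo nvidia-smi -i 0 -e 0

# The new mode is applied at the next reboot
sudo reboot

# Afterwards, check the current and pending ECC mode
nvidia-smi -q -d ECC | grep -A 2 "ECC Mode"
```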
Use nvidia-smi | grep Tesla to look up the GPU index from the earlier output: 0
$ nvidia-smi | grep Tesla
|   0  Tesla P40                      Off |   00000000:01:00.0 Off |                  Off |
-----------------------------------------------------------------------------------------
There is one item in the nvidia-smi output that can be seriously misleading: "CUDA Version: 12.4" only means that the installed driver supports CUDA up to version 12.4, i.e. this card can run CUDA 12.4 or lower. It does not mean that CUDA 12.4 is already installed. Based on this information, the next step is to install CUDA 12.4. To avoid compatibility issues, I install plain 12.4.0 rather than one of the update releases.
Note that CUDA 12.4.0 officially supports Ubuntu only up to 22.04; on a 24.04 system the installation may run into problems.
Problem
$ sudo apt-get -y install cuda-toolkit-12-4
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 nsight-systems-2023.4.4 : Depends: libtinfo5 but it is not installable
E: Unable to correct problems, you have held broken packages.
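On Ubuntu 24.04 the root cause is that libtinfo5 was dropped from the archive. One common workaround is to install the package manually from the 22.04 (jammy) pool; the exact filename below is an assumption, so check the archive listing for the current version first:

```shell
# libtinfo5 is no longer shipped in Ubuntu 24.04; pull it from the jammy pool
# (version string is an assumption -- verify it against the archive listing)
wget http://archive.ubuntu.com/ubuntu/pool/universe/n/ncurses/libtinfo5_6.3-2ubuntu0.1_amd64.deb
sudo dpkg -i libtinfo5_6.3-2ubuntu0.1_amd64.deb

# Then retry the toolkit install
sudo apt-get -y install cuda-toolkit-12-4
```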
ii  cuda-cccl-12-4                   12.4.99-1           amd64  CUDA CCCL
ii  cuda-command-line-tools-12-4     12.4.0-1            amd64  CUDA command-line tools
ii  cuda-compiler-12-4               12.4.0-1            amd64  CUDA compiler
ii  cuda-crt-12-4                    12.4.99-1           amd64  CUDA crt
ii  cuda-cudart-12-4                 12.4.99-1           amd64  CUDA Runtime native Libraries
ii  cuda-cudart-dev-12-4             12.4.99-1           amd64  CUDA Runtime native dev links, headers
ii  cuda-cuobjdump-12-4              12.4.99-1           amd64  CUDA cuobjdump
ii  cuda-cupti-12-4                  12.4.99-1           amd64  CUDA profiling tools runtime libs.
ii  cuda-cupti-dev-12-4              12.4.99-1           amd64  CUDA profiling tools interface.
ii  cuda-cuxxfilt-12-4               12.4.99-1           amd64  CUDA cuxxfilt
ii  cuda-documentation-12-4          12.4.99-1           amd64  CUDA documentation
ii  cuda-driver-dev-12-4             12.4.99-1           amd64  CUDA Driver native dev stub library
ii  cuda-gdb-12-4                    12.4.99-1           amd64  CUDA-GDB
ii  cuda-libraries-12-4              12.4.0-1            amd64  CUDA Libraries 12.4 meta-package
ii  cuda-libraries-dev-12-4          12.4.0-1            amd64  CUDA Libraries 12.4 development meta-package
ii  cuda-nsight-12-4                 12.4.99-1           amd64  CUDA nsight
ii  cuda-nsight-compute-12-4         12.4.0-1            amd64  NVIDIA Nsight Compute
ii  cuda-nsight-systems-12-4         12.4.0-1            amd64  NVIDIA Nsight Systems
ii  cuda-nvcc-12-4                   12.4.99-1           amd64  CUDA nvcc
ii  cuda-nvdisasm-12-4               12.4.99-1           amd64  CUDA disassembler
ii  cuda-nvml-dev-12-4               12.4.99-1           amd64  NVML native dev links, headers
ii  cuda-nvprof-12-4                 12.4.99-1           amd64  CUDA Profiler tools
ii  cuda-nvprune-12-4                12.4.99-1           amd64  CUDA nvprune
ii  cuda-nvrtc-12-4                  12.4.99-1           amd64  NVRTC native runtime libraries
ii  cuda-nvrtc-dev-12-4              12.4.99-1           amd64  NVRTC native dev links, headers
ii  cuda-nvtx-12-4                   12.4.99-1           amd64  NVIDIA Tools Extension
ii  cuda-nvvm-12-4                   12.4.99-1           amd64  CUDA nvvm
ii  cuda-nvvp-12-4                   12.4.99-1           amd64  CUDA Profiler tools
ii  cuda-opencl-12-4                 12.4.99-1           amd64  CUDA OpenCL native Libraries
ii  cuda-opencl-dev-12-4             12.4.99-1           amd64  CUDA OpenCL native dev links, headers
ii  cuda-profiler-api-12-4           12.4.99-1           amd64  CUDA Profiler API
ii  cuda-repo-ubuntu2204-12-4-local  12.4.0-550.54.14-1  amd64  cuda repository configuration files
ii  cuda-sanitizer-12-4              12.4.99-1           amd64  CUDA Sanitizer
ii  cuda-toolkit-12-4                12.4.0-1            amd64  CUDA Toolkit 12.4 meta-package
ii  cuda-toolkit-12-4-config-common  12.4.99-1           all    Common config package for CUDA Toolkit 12.4.
ii  cuda-toolkit-12-config-common    12.4.99-1           all    Common config package for CUDA Toolkit 12.
ii  cuda-toolkit-config-common       12.4.99-1           all    Common config package for CUDA Toolkit.
ii  cuda-tools-12-4                  12.4.0-1            amd64  CUDA Tools meta-package
ii  cuda-visual-tools-12-4           12.4.0-1            amd64  CUDA visual tools
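Once the toolkit packages are in place, nvcc is not on PATH by default. A typical addition to ~/.bashrc, assuming the default /usr/local/cuda-12.4 install prefix:

```shell
# Assumes the default CUDA 12.4 install prefix
export CUDA_HOME=/usr/local/cuda-12.4
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

# The toolkit bin directory should now be first on PATH
echo "$PATH" | cut -d: -f1
```

After sourcing this, nvcc --version should report release 12.4.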
-m X      Use X MB of memory.
-m N%     Use N% of the available GPU memory. Default is 90%
-d        Use doubles
-tc       Try to use Tensor cores
-l        Lists all GPUs in the system
-i N      Execute only on GPU N
-c FILE   Use FILE as compare kernel. Default is compare.ptx
-stts T   Set timeout threshold to T seconds for using SIGTERM to abort child processes before using SIGKILL. Default is 30
-h        Show this help message
Examples:
 gpu-burn -d 3600 # burns all GPUs with doubles for an hour
 gpu-burn -m 50%  # burns using 50% of the available GPU memory
 gpu-burn -l      # list GPUs
 gpu-burn -i 2    # burns only GPU of index 2
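For reference, gpu-burn is built from source; a sketch assuming the upstream wilicc/gpu-burn repository and the CUDA toolkit installed above:

```shell
# Clone and build gpu-burn (assumes git, make and a working nvcc)
git clone https://github.com/wilicc/gpu-burn.git
cd gpu-burn
# CUDAPATH can be overridden if the toolkit lives somewhere else
make CUDAPATH=/usr/local/cuda-12.4
./gpu_burn -h
```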
We use gpu-burn -d 600 to run a 10-minute test:
$ ./gpu_burn -d 600
Using compare file: compare.ptx
Burning for 600 seconds.
GPU 0: Tesla P40 (UUID: GPU-14413e65-6006-ecbe-19fb-de88575d8a3e)
Initialized device 0 with 24438 MB of memory (24278 MB available, using 21850 MB of it), using DOUBLES
Results are 536870912 bytes each, thus performing 40 iterations
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120                Driver Version: 550.120        CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla P40                      Off |   00000000:01:00.0 Off |                  Off |
| N/A   28C    P0             53W /  250W |   21665MiB /  24576MiB |    100%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A     13338      C   ./gpu_burn                                  21662MiB |
+-----------------------------------------------------------------------------------------+
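While the burn is running, per-second device statistics can be streamed in a second terminal, for example with nvidia-smi's dmon mode:

```shell
# Stream power/temperature (p), utilization (u), clocks (c) and memory (m)
# stats once per second; stop with Ctrl-C
nvidia-smi dmon -s pucm
```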
The temperature basically never went above 30 °C, which is almost embarrassing. I'm starting to worry the GPU might catch a cold 🙉.
AI Inference
Here we use Ollama, the simplest option, for a quick test:
curl https://ollama.ai/install.sh | sh
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.
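With the service up, pulling a small model makes a quick smoke test; the model tag below is only an example:

```shell
# Pull and query a small model (any tag from the Ollama library works)
ollama run qwen2.5:7b "hello"

# Check that the model is loaded and how much of it sits on the GPU
ollama ps
```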
#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################

# Install directory without trailing slash
#install_dir="/home/$(whoami)"

# Name of the subdirectory
#clone_dir="stable-diffusion-webui"

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--api --listen --port 7860 --gradio-auth dong4j:dusj@5282010 --enable-insecure-extension-access"
× Building wheel for tokenizers (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [49 lines of output]
    running bdist_wheel
    running build
    running build_py
    creating build/lib.linux-x86_64-cpython-312/tokenizers
    copying py_src/tokenizers/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers
    creating build/lib.linux-x86_64-cpython-312/tokenizers/models
    copying py_src/tokenizers/models/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers/models
    creating build/lib.linux-x86_64-cpython-312/tokenizers/decoders
    copying py_src/tokenizers/decoders/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers/decoders
    creating build/lib.linux-x86_64-cpython-312/tokenizers/normalizers
    copying py_src/tokenizers/normalizers/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers/normalizers
    creating build/lib.linux-x86_64-cpython-312/tokenizers/pre_tokenizers
    copying py_src/tokenizers/pre_tokenizers/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers/pre_tokenizers
    creating build/lib.linux-x86_64-cpython-312/tokenizers/processors
    copying py_src/tokenizers/processors/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers/processors
    creating build/lib.linux-x86_64-cpython-312/tokenizers/trainers
    copying py_src/tokenizers/trainers/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers/trainers
    creating build/lib.linux-x86_64-cpython-312/tokenizers/implementations
    copying py_src/tokenizers/implementations/sentencepiece_unigram.py -> build/lib.linux-x86_64-cpython-312/tokenizers/implementations
    copying py_src/tokenizers/implementations/sentencepiece_bpe.py -> build/lib.linux-x86_64-cpython-312/tokenizers/implementations
    copying py_src/tokenizers/implementations/base_tokenizer.py -> build/lib.linux-x86_64-cpython-312/tokenizers/implementations
    copying py_src/tokenizers/implementations/char_level_bpe.py -> build/lib.linux-x86_64-cpython-312/tokenizers/implementations
    copying py_src/tokenizers/implementations/byte_level_bpe.py -> build/lib.linux-x86_64-cpython-312/tokenizers/implementations
    copying py_src/tokenizers/implementations/bert_wordpiece.py -> build/lib.linux-x86_64-cpython-312/tokenizers/implementations
    copying py_src/tokenizers/implementations/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers/implementations
    creating build/lib.linux-x86_64-cpython-312/tokenizers/tools
    copying py_src/tokenizers/tools/__init__.py -> build/lib.linux-x86_64-cpython-312/tokenizers/tools
    copying py_src/tokenizers/tools/visualizer.py -> build/lib.linux-x86_64-cpython-312/tokenizers/tools
    copying py_src/tokenizers/__init__.pyi -> build/lib.linux-x86_64-cpython-312/tokenizers
    copying py_src/tokenizers/models/__init__.pyi -> build/lib.linux-x86_64-cpython-312/tokenizers/models
    copying py_src/tokenizers/decoders/__init__.pyi -> build/lib.linux-x86_64-cpython-312/tokenizers/decoders
    copying py_src/tokenizers/normalizers/__init__.pyi -> build/lib.linux-x86_64-cpython-312/tokenizers/normalizers
    copying py_src/tokenizers/pre_tokenizers/__init__.pyi -> build/lib.linux-x86_64-cpython-312/tokenizers/pre_tokenizers
    copying py_src/tokenizers/processors/__init__.pyi -> build/lib.linux-x86_64-cpython-312/tokenizers/processors
    copying py_src/tokenizers/trainers/__init__.pyi -> build/lib.linux-x86_64-cpython-312/tokenizers/trainers
    copying py_src/tokenizers/tools/visualizer-styles.css -> build/lib.linux-x86_64-cpython-312/tokenizers/tools
    running build_ext
    running build_rust
    error: can't find Rust compiler

    If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler.

    To update pip, run:

        pip install --upgrade pip

    and then retry package installation.

    If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tokenizers
ERROR: Failed to build installable wheels for some pyproject.toml based projects (Pillow, tokenizers)
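The "can't find Rust compiler" part is fixed exactly as the message suggests: install the toolchain with rustup, then re-run the installation.

```shell
# Install the Rust toolchain via rustup, then load it into the current shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
. "$HOME/.cargo/env"
rustc --version
```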
Could not find openssl via pkg-config:
pkg-config exited with status code 1
> PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 pkg-config --libs --cflags openssl

The system library `openssl` required by crate `openssl-sys` was not found.
The file `openssl.pc` needs to be installed and the PKG_CONFIG_PATH environment
variable must contain its parent directory. The PKG_CONFIG_PATH environment
variable is not set.

HINT: if you have installed the library, try setting PKG_CONFIG_PATH to the
directory containing `openssl.pc`.

cargo:warning=Could not find directory of OpenSSL installation, and this -sys
crate cannot proceed without this knowledge. If OpenSSL is installed and this
crate had trouble finding it, you can set the OPENSSL_DIR environment variable
for the compilation process. See stderr section below for further information.

--- stderr
Could not find directory of OpenSSL installation, and this -sys crate cannot
proceed without this knowledge. If OpenSSL is installed and this crate had
trouble finding it, you can set the OPENSSL_DIR environment variable for the
compilation process.

Make sure you also have the development packages of openssl installed.
For example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.

If you're in a situation where you think the directory *should* be found
automatically, please open a bug at https://github.com/sfackler/rust-openssl
and include information about your system as well as this message.

$HOST = x86_64-unknown-linux-gnu
$TARGET = x86_64-unknown-linux-gnu
openssl-sys = 0.9.106

warning: build failed, waiting for other jobs to finish...
error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib --` failed with code 101
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
Fixing the Problem
sudo apt install -y libssl-dev pkg-config
error: could not compile tokenizers (lib) due to 1 previous error; 3 warnings emitted Caused by: process didn't exit successfully: /home/dong4j/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg 'feature="cached-path"' --cfg 'feature="clap"' --cfg 'feature="cli"' --cfg 'feature="default"' --cfg 'feature="dirs"' --cfg 'feature="esaxx_fast"' --cfg 'feature="http"' --cfg 'feature="indicatif"' --cfg 'feature="onig"' --cfg 'feature="progressbar"' --cfg 'feature="reqwest"' --check-cfg 'cfg(docsrs,test)' --check-cfg 'cfg(feature, values("cached-path", "clap", "cli", "default", "dirs", "esaxx_fast", "fancy-regex", "http", "indicatif", "onig", "progressbar", "reqwest", "unstable_wasm"))' -C metadata=dcfeed9efd370df2 -C extra-filename=-6d0cff9823c410a3 --out-dir /tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps -C strip=debuginfo -L dependency=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps --extern aho_corasick=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libaho_corasick-806902cb00c4532e.rmeta --extern cached_path=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libcached_path-140c0420b639fee2.rmeta --extern clap=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libclap-f0a196de7c2c2d55.rmeta --extern derive_builder=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libderive_builder-2c35d20c5dcdd1b2.rmeta --extern dirs=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libdirs-2a450a96233c46e8.rmeta --extern 
esaxx_rs=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libesaxx_rs-14b460a83d36cffb.rmeta --extern getrandom=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libgetrandom-e4e984aab09fca54.rmeta --extern indicatif=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libindicatif-4bd79d992ed7623e.rmeta --extern itertools=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libitertools-d3748b68b90d39fc.rmeta --extern lazy_static=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/liblazy_static-293673978ef0d67b.rmeta --extern log=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/liblog-eeeeba1bbfa2ffb1.rmeta --extern macro_rules_attribute=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libmacro_rules_attribute-d837ae137eb1c6b5.rmeta --extern monostate=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libmonostate-ee413b2bc414638b.rmeta --extern onig=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libonig-fdcb261852f9dd25.rmeta --extern paste=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libpaste-e8c11b814c73abf8.so --extern rand=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/librand-1a565600c8701f83.rmeta --extern rayon=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/librayon-327814f00a0af0eb.rmeta --extern rayon_cond=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/librayon_cond-4c94b6c4149cf439.rmeta --extern regex=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libregex-b19fe03f86732b4c.rmeta --extern 
regex_syntax=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libregex_syntax-2d5a08e62adc8bc5.rmeta --extern reqwest=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libreqwest-2e0f1b3d46ba2d8c.rmeta --extern serde=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libserde-09bbdc6f8d206673.rmeta --extern serde_json=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libserde_json-f9dd1e99e29af66a.rmeta --extern spm_precompiled=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libspm_precompiled-b00102097bfa2810.rmeta --extern thiserror=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libthiserror-533f4e0aa82c3f92.rmeta --extern unicode_normalization_alignments=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libunicode_normalization_alignments-9913c65b13ec4f88.rmeta --extern unicode_segmentation=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libunicode_segmentation-175194bc41713dcf.rmeta --extern unicode_categories=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/deps/libunicode_categories-fb12b97a420ed313.rmeta -L native=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/build/bzip2-sys-1373a3f19d1e511d/out/lib -L native=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/build/zstd-sys-6d5eafba8c9430e7/out -L native=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/build/esaxx-rs-9028bfd7929bddde/out -L native=/tmp/pip-install-dnunfm0j/tokenizers_40edca6a2fef4b23bfab7d044c44a2a3/target/release/build/onig_sys-9ef1a52efbdf7afc/out (exit status: 1) warning: build failed, waiting for other jobs to finish... 
error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib --` failed with code 101
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tokenizers
ERROR: Failed to build installable wheels for some pyproject.toml based projects (Pillow, tokenizers)
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
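This last error usually just means the tokenizer files could not be downloaded from huggingface.co. From networks where that host is unreachable, a common workaround is to point huggingface_hub at a mirror before launching; the mirror host below is an assumption, use whichever one you trust:

```shell
# Route Hugging Face downloads through a mirror, then start the webui again
export HF_ENDPOINT=https://hf-mirror.com
./webui.sh
```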