Using TensorFlow 2.8 on an Apple Silicon arm64 chip
My computer recently had an unfortunate interface with dihydrogen monoxide. To be determined whether it will come back to life, but it's not looking good. So, I bought a new MacBook, which means the M3 chip (arm64). I had a nice experience using the M1 at my previous job, so I was looking forward to it.
Of course 😩 the x86 vs arm architecture issues started immediately when I tried using TensorFlow.
Here's how I fixed it. The pull request: deepcell-imaging#229
DeepCell uses TF 2.8, so that's what we have to use. Unfortunately, the 2.8.4 package doesn't come with arm64 binaries. Incidentally, TF 2.16.1 does have arm64 binaries ... but I can't use it here 😑
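You can see the problem directly: on an arm64 Mac there's simply no matching wheel for pip to find. The exact error text varies by pip version, but it looks something like this:
$ pip install tensorflow==2.8.4
ERROR: Could not find a version that satisfies the requirement tensorflow==2.8.4
ERROR: No matching distribution found for tensorflow==2.8.4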
Apple has some documentation for installing TensorFlow and the "metal" plugin. In particular,
For TensorFlow version 2.12 or earlier:
python -m pip install tensorflow-macos
In our case we need tensorflow-macos==2.8.0, as found in the tensorflow-macos release history. Unfortunately, the files list reveals there's no Python 3.10 distribution, so I need to downgrade to Python 3.9.
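One way to handle the downgrade, assuming a Python 3.9 interpreter is already on your PATH (via Homebrew, pyenv, whatever you use), is a dedicated virtualenv:
$ python3.9 -m venv venv-tf28
$ source venv-tf28/bin/activate
$ python --version  # should report 3.9.x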
As for tensorflow-metal, the package documentation says we need v0.4.0.
I packaged a new requirements file for Mac arm64 users:
$ cat requirements-mac-arm64.txt
tensorflow-macos==2.8.0
tensorflow-metal==0.4.0
Then you install the Mac requirements:
pip install -r requirements-mac-arm64.txt
Of course, the shenanigans don't stop! Running pip install -r requirements.txt fails to install DeepCell, because it depends on tensorflow – not tensorflow-macos (which provides the same Python module, tensorflow).
So I ran it this way to skip dependencies after installing the ones we could:
# install the arm64-specific TensorFlow packages first
pip install -r requirements-mac-arm64.txt
# install everything that resolves normally (this fails on DeepCell's tensorflow dependency)
pip install -r requirements.txt
# then install the rest without dependency resolution
pip install -r requirements.txt --no-deps
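Heads up: since nothing literally named tensorflow ends up installed, pip's consistency check will likely still flag DeepCell's requirement as unmet. That's expected here, and safe to ignore:
$ pip check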
Then I got an interesting protobuf failure.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Quick fix: grab the most recent 3.20.x protobuf version, 3.20.3.
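That's a one-liner (3.20.3 was the latest 3.20.x at the time of writing; afterwards, a quick import confirms the stack loads):
$ pip install protobuf==3.20.3
$ python -c 'import tensorflow as tf; print(tf.__version__)'  # expect 2.8.0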
Apple provides a test script:
import tensorflow as tf

# Load the CIFAR-100 dataset (downloads on first run)
cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()

# Build a ResNet50 from scratch (no pretrained weights), sized for CIFAR images
model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)
One ~180 MB dataset download later … we're golden.
2024-06-05 23:10:30.794862: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
Just to confirm, let's check Activity Monitor – and yes, it's using the GPU. 🎉 😤
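If you'd rather verify from Python than from Activity Monitor, the standard device-listing API works on 2.8 too; with tensorflow-metal active it should print something like this:
$ python -c 'import tensorflow as tf; print(tf.config.list_physical_devices("GPU"))'
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]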
Phew. Well, hopefully this is a one-time thing. Most of our development happens in the cloud, which is x86, the more common architecture.
Until our next adventure with binaries ✌