You like pytorch? You like micrograd? You love tinygrad! ❤️

For something in between a pytorch and a karpathy/micrograd.

This may not be the best deep learning framework, but it is a deep learning framework.

Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. Support the simple basic ops, and you get SOTA vision (models/efficientnet.py) and language (models/transformer.py) models. The sub-1000-line core of it is in tinygrad/.

Eventually, we will build custom hardware for tinygrad, and it will be blindingly fast. We are working on support for the Apple Neural Engine and the Google TPU in the accel/ folder.

Installation

```
git clone https://github.com/tinygrad/tinygrad.git
cd tinygrad
python3 -m pip install -e .
```

Contributing

There's a lot of interest in tinygrad lately.

- Bugfixes are the best and always welcome! Like this one.
- If you don't understand the code you are changing, don't change it!
- All code golf PRs will be closed, but conceptual cleanups are great.
- If you are adding a feature, you need to include tests.
- Improving test coverage is great, with reliable, non-brittle tests.

Example

```python
from tinygrad.tensor import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0, 0, -2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy())  # dz/dx
print(y.grad.numpy())  # dz/dy
```

Same example in torch

```python
import torch

x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0, 0, -2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.numpy())  # dz/dx
print(y.grad.numpy())  # dz/dy
```
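The gradients printed above can also be derived by hand, which is a useful sanity check. Since z = Σᵢⱼ(y·x)ᵢⱼ, each entry dz/dx[k, j] is Σᵢ y[i, k] (so row k of dz/dx is the constant y[0, k]), and dz/dy[i, k] is Σⱼ x[k, j] (the row sums of x). Here is a minimal sketch of that check in plain NumPy; the `tile`/`sum` formulation is my own, not from the README:

```python
import numpy as np

# Hand-derived gradients for z = sum(y @ x), matching the examples above.
# z = sum over i,k,j of y[i, k] * x[k, j], so:
#   dz/dx[k, j] = sum_i y[i, k]   -> row k of dz/dx is the constant y[0, k]
#   dz/dy[i, k] = sum_j x[k, j]   -> the row sums of x
x = np.eye(3)
y = np.array([[2.0, 0.0, -2.0]])

dz_dx = np.tile(y.T, (1, 3))            # [[2,2,2],[0,0,0],[-2,-2,-2]]
dz_dy = x.sum(axis=1, keepdims=True).T  # [[1,1,1]]

print(dz_dx)  # should equal x.grad printed above
print(dz_dy)  # should equal y.grad printed above
```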
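Since backward() populates the .grad tensors, a full training step is just the example above plus a parameter update. Below is a hypothetical sketch built only from the Tensor calls the README shows (Tensor, matmul, sum, backward, .grad, .numpy()); the data, the stand-in "loss", the learning rate, and the rebuild-from-numpy update rule are all illustrative assumptions, not tinygrad's recommended training loop:

```python
from tinygrad.tensor import Tensor

# Hypothetical single SGD step; everything here except the Tensor API
# shown in the README example is an assumption for illustration.
w = Tensor.eye(3, requires_grad=True)
data = Tensor([[1.0, 2.0, 3.0]])

loss = data.matmul(w).sum()  # stand-in for a real loss function
loss.backward()              # populates w.grad

lr = 0.01
# Manual parameter update: rebuild the weight tensor from numpy arrays.
w = Tensor(w.numpy() - lr * w.grad.numpy(), requires_grad=True)
print(w.numpy())
```

In practice you would wrap the update in an optimizer rather than rebuilding tensors by hand; this sketch just makes the mechanics of .grad visible.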