Posted at 07:58 PM in TVM & VTA | Permalink
Let it Rock.
The terminal shows the Raspberry Pi's status.
It uses 4 CPUs to run.
I left at 7:00 PM; it was almost finished.
But I don't know what it uploads, or where.
Posted at 07:46 PM in TVM & VTA | Permalink
On the host, run:
python3 -m tvm.exec.rpc_tracker --host=0.0.0.0 --port=9190
On the host, check the client status:
python3 -m tvm.exec.query_rpc_tracker --host=0.0.0.0 --port=9190
On the Raspberry Pi, run:
python -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=rasp3b
Then follow the tutorial.
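Once the tracker and the Pi's RPC server are both up, the host can also request a session programmatically instead of going through the tutorial scripts. A minimal sketch, assuming TVM is installed on the host and the key matches the `rasp3b` registered above; the function name is mine:

```python
def request_rasp3b(tracker_host, tracker_port=9190, key="rasp3b"):
    """Request a Raspberry Pi session from the RPC tracker (sketch)."""
    from tvm import rpc  # deferred import so this file loads without TVM
    tracker = rpc.connect_tracker(tracker_host, tracker_port)
    remote = tracker.request(key)  # blocks until a rasp3b device is free
    return remote
```

After that, `remote.upload(...)` and `remote.load_module(...)` work as in the tutorial.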
Posted at 11:58 PM in TVM & VTA | Permalink
The solution I found on GitHub uses nnvm to compile.
I tried to compile using Relay instead.
The RPi client code is simple: load the JSON graph, the library (.tar), and the parameters.
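The loading step can be sketched like this (TVM 0.6-era API; the file names and function name are my assumptions):

```python
def load_deployed_model(graph_json, lib_tar, params_bin):
    """Load the JSON graph, library (.tar), and parameters on the RPi (sketch)."""
    import tvm  # deferred import so this file loads without TVM
    from tvm.contrib import graph_runtime

    lib = tvm.module.load(lib_tar)                  # compiled library
    graph = open(graph_json).read()                 # execution graph JSON
    params = bytearray(open(params_bin, "rb").read())
    module = graph_runtime.create(graph, lib, tvm.cpu(0))
    module.load_params(params)                      # deserialize the weights
    return module
```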
Result on the RPi:
I asked it to run inference on an image of a dolphin.
Posted at 08:03 PM in TVM & VTA | Permalink
Follow the demo --> https://docs.tvm.ai/tutorials/frontend/deploy_model_on_rasp.html#sphx-glr-tutorials-frontend-deploy-model-on-rasp-py
I found some images from other sites.
It recognizes the dog but not the dolphin.
Todo: run on the RPi without RPC.
Posted at 08:08 PM in TVM & VTA | Permalink
The basic flow is:
Load the model (SSD, CoreML, ONNX, Keras, MXNet, Caffe2, TFLite, TensorFlow)
Compile using Relay, with or without external library support.
Generate the execution graph. (I think)
TVM then runs the graph.
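For an MXNet model, the flow above might look like this (a sketch; the model name, input shape, and target are my assumptions, not from any specific tutorial):

```python
def compile_mxnet_model(model_name="resnet18_v1", target="llvm"):
    """Load a model, compile it with Relay, build and create the graph (sketch)."""
    import tvm  # deferred imports so this file loads without TVM/MXNet
    from tvm import relay
    from tvm.contrib import graph_runtime
    from mxnet.gluon.model_zoo.vision import get_model

    block = get_model(model_name, pretrained=True)           # 1. load the model
    mod, params = relay.frontend.from_mxnet(
        block, shape={"data": (1, 3, 224, 224)})             # 2. import into Relay
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build(
            mod, target, params=params)                      # 3. generate the graph
    module = graph_runtime.create(graph, lib, tvm.cpu(0))    # 4. TVM runs the graph
    module.set_input(**params)
    return module
```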
https://docs.tvm.ai/langref/index.html <--- Studying Relay is the key.
I need to know where I can quantize the weights and reduce the computation graph size.
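Relay does ship a built-in quantization pass (`relay.quantize`), which may be the answer; a sketch, assuming `mod` and `params` come from a `relay.frontend` importer:

```python
def quantize_weights(mod, params):
    """Quantize a Relay module to shrink the weights and compute (sketch)."""
    from tvm import relay  # deferred import so this file loads without TVM
    with relay.quantize.qconfig():  # default quantization settings
        return relay.quantize.quantize(mod, params=params)
```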
Posted at 08:37 PM in TVM & VTA | Permalink
Follow the TVM tutorial --> https://docs.tvm.ai/tutorials/frontend/deploy_ssd_gluoncv.html#sphx-glr-tutorials-frontend-deploy-ssd-gluoncv-py
Lesson learned: when you install gluoncv, it messes up the numpy install.
It reinstalls numpy 1.16.x, which seems to have a problem.
So you need to uninstall 1.16.x and install 1.15.4.
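A quick way to check which numpy ended up active after installing gluoncv (the 1.16.x / 1.15.4 version numbers are from my own run):

```python
# Check the numpy version gluoncv left behind; if it is 1.16.x,
# downgrade with:  pip uninstall numpy && pip install numpy==1.15.4
import numpy as np

major, minor = (int(x) for x in np.__version__.split(".")[:2])
if (major, minor) == (1, 16):
    print("numpy", np.__version__, "- downgrade to 1.15.4")
else:
    print("numpy", np.__version__, "- ok")
```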
Posted at 10:16 PM in TVM & VTA | Permalink
Follow the instructions and install from source.
Pay attention to the LLVM installation. TVM 0.5 cannot use versions above 6.0.
So on TVM 0.6 I still picked LLVM 6.0.1.
I tried to install NNPACK,
but I don't understand the last instruction: where is my NNPACK_LIB_PATH?
I think it is OK to use TVM without it.
I installed TVM in a VirtualBox VM, along with Jupyter Notebook,
so I can easily use it from my Windows machine.
Posted at 08:52 PM in TVM & VTA | Permalink
Time to switch gears from FPGA to TVM/VTA stuff.
Posted at 12:43 AM in TVM & VTA | Permalink