Effective TensorFlow 2.0 There are multiple changes in TensorFlow 2.0 to make TensorFlow users more productive. TensorFlow 2.0 removes redundant APIs, makes APIs more consistent (Unified RNNs, Unified Optimizers), and better integrates with the Python runtime through eager execution. Many RFCs have explained the changes that have gone into making TensorFlow 2.0. This guide presents a vision for what development in TensorFlow 2.0 should look like.
Quantization-aware training Quantization-aware model training ensures that the forward pass matches precision for both training and inference. There are two aspects to this: operator fusions at inference time are accurately modeled at training time, and quantization effects at inference are modeled at training time. For efficient inference, TensorFlow combines batch normalization with the preceding convolutional layer.
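The core idea of modeling quantization effects at training time can be illustrated with a "fake quantization" function: during the forward pass, values are clipped and snapped to the grid of representable integer levels, but arithmetic stays in floating point. This is a minimal pure-Python sketch of the concept, not the TensorFlow Model Optimization implementation; the function name, range, and bit width here are illustrative assumptions.

```python
def fake_quantize(x, num_bits=8, x_min=-1.0, x_max=1.0):
    """Simulate inference-time integer quantization during training.

    The input is clipped to [x_min, x_max], snapped to the nearest of
    2**num_bits - 1 uniformly spaced levels, and returned as a float.
    A training graph inserts a step like this after weights/activations
    so the model learns to tolerate the reduced precision.
    (Illustrative sketch; ranges in practice are learned or calibrated.)
    """
    levels = 2 ** num_bits - 1
    scale = (x_max - x_min) / levels          # step size between levels
    x = min(max(x, x_min), x_max)             # clip to the quantized range
    q = round((x - x_min) / scale)            # nearest integer level
    return x_min + q * scale                  # back to float ("dequantize")
```

For example, with 8 bits over [-1, 1] the step size is 2/255 ≈ 0.0078, so 0.5 maps to the nearest representable value ≈ 0.4980, and out-of-range inputs saturate at the clip boundaries.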
Note: XLA is still under development. Some use cases will not see improvements in speed or decreased memory usage. XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that optimizes TensorFlow computations. The results are improvements in speed, memory usage, and portability on server and mobile platforms. Initially, most users will not see large benefits from XLA, but are welcome to experiment by using XLA via just-in-time (JIT) or ahead-of-time (AOT) compilation.
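One of the main optimizations a compiler like XLA performs is operator fusion: instead of executing each op separately and writing intermediate buffers to memory, it emits a single kernel that computes the whole expression per element. The sketch below is not XLA itself, just a plain-Python analogy of why fusion reduces memory traffic; the function names are illustrative.

```python
def unfused(xs, a, b):
    # Two separate "ops", the way an unoptimized graph executes them:
    # the multiply materializes a full intermediate list in memory,
    # and the add makes a second pass that re-reads that buffer.
    tmp = [a * x for x in xs]
    return [t + b for t in tmp]

def fused(xs, a, b):
    # A fusing compiler combines both ops into one kernel that
    # computes a*x + b per element, with no intermediate buffer.
    return [a * x + b for x in xs]
```

Both versions produce identical results; the fused form simply avoids allocating and re-reading the intermediate, which is where the speed and memory-usage improvements come from.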
If your device is not yet supported, it may not be too hard to add support. You can learn about that process here. We're looking forward to getting your help expanding this table! Getting Started with Portable Reference Code If you don't have a particular microcontroller platform in mind yet, or just want to try out the code before beginning porting, the easiest way to begin is by downloading the