Name | Last modified | Size | License
---|---|---|---
cached-property | - | |
grpcio | - | |
h5py | - | |
keras-applications | - | |
keras-preprocessing | - | |
numpy | - | |
scipy | - | |
six | - | |
tensorflow-aarch64 | - | |
# Release 2.8.0-rc0

## Major Features and Improvements
- `tf.lite`:
  - Added TFLite builtin op support for the following TF ops (a conversion sketch follows this list):
    - `tf.raw_ops.Bucketize` op on CPU.
    - `tf.where` op for data types `tf.int32`/`tf.uint32`/`tf.int8`/`tf.uint8`/`tf.int64`.
    - `tf.random.normal` op for output data type `tf.float32` on CPU.
    - `tf.random.uniform` op for output data type `tf.float32` on CPU.
    - `tf.random.categorical` op for output data type `tf.int64` on CPU.
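As a hedged illustration of the new builtin-op coverage, here is a minimal sketch of converting a function that exercises `tf.where` on `tf.int32` and `tf.random.uniform` with `tf.float32` output; the function, shapes, and values are illustrative, and the conversion uses the standard `TFLiteConverter` flow:

```python
import tensorflow as tf

# Hypothetical function exercising two of the newly supported builtins:
# tf.where on tf.int32 and tf.random.uniform with tf.float32 output.
@tf.function(input_signature=[tf.TensorSpec(shape=[4], dtype=tf.int32)])
def sample(x):
    noise = tf.random.uniform(shape=[4], dtype=tf.float32)
    masked = tf.where(x > 0, x, tf.zeros_like(x))
    return tf.cast(masked, tf.float32) + noise

# Standard TFLite conversion flow.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [sample.get_concrete_function()])
tflite_model = converter.convert()
```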
- `tensorflow.experimental.tensorrt`:
  - `conversion_params` is now deprecated inside `TrtGraphConverterV2` in favor of direct arguments: `max_workspace_size_bytes`, `precision_mode`, `minimum_segment_size`, `maximum_cached_engines`, `use_calibration` and `allow_build_at_runtime`.
  - Added a new parameter called `save_gpu_specific_engines` to the `.save()` function inside `TrtGraphConverterV2`. When `False`, the `.save()` function won't save any TRT engines that have been built. When `True` (default), the original behavior is preserved.
  - `TrtGraphConverterV2` provides a new API called `.summary()` which outputs a summary of the inference graph converted by TF-TRT: it shows each `TRTEngineOp` with the shapes and dtypes of its inputs and outputs. A detailed version of the summary additionally prints all the TensorFlow ops included in each `TRTEngineOp`. A usage sketch follows this list.
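A minimal sketch of the new conversion flow, assuming a SavedModel at a hypothetical path; the argument values below are illustrative placeholders:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Direct arguments replace the deprecated conversion_params;
# '/tmp/saved_model' is a hypothetical path.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='/tmp/saved_model',
    precision_mode='FP16',
    max_workspace_size_bytes=1 << 30,
    minimum_segment_size=3,
    maximum_cached_engines=1,
    allow_build_at_runtime=True)
converter.convert()
converter.summary()  # per-TRTEngineOp input/output shapes and dtypes

# Skip serializing the built TRT engines; pass True (the default) to keep them.
converter.save('/tmp/trt_saved_model', save_gpu_specific_engines=False)
```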
- `tf.tpu.experimental.embedding`:
  - `tf.tpu.experimental.embedding.FeatureConfig` now takes an additional argument `output_shape` which can specify the shape of the output activation for the feature (see the sketch after this list).
  - `tf.tpu.experimental.embedding.TPUEmbedding` now matches the behavior of `tf.tpu.experimental.embedding.serving_embedding_lookup`: it can take dense and sparse tensors of arbitrary rank. For ragged tensors, the input tensor must still be rank 2, but the activations can now be rank 2 or above, by specifying the output shape in the feature config or via the build method.
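A hedged sketch of the new `output_shape` argument; the table configuration, feature names, and shape values are illustrative placeholders:

```python
import tensorflow as tf

# Illustrative table: a 1024-entry vocabulary with 16-dimensional embeddings.
table = tf.tpu.experimental.embedding.TableConfig(
    vocabulary_size=1024, dim=16, name='video')

# The new output_shape argument fixes the shape of this feature's output
# activation (values here are placeholders).
feature_config = {
    'watched': tf.tpu.experimental.embedding.FeatureConfig(
        table=table, output_shape=[8, 4], name='watched'),
}
```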
## Bug Fixes and Other Changes
- `tf.data`:
  - The `parallel_batch` optimization, which parallelizes copying of batch elements, is now enabled by default unless disabled by users (a sketch of how to opt out follows this list).
  - Added the ability for `TensorSliceDataset` to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
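Since `parallel_batch` is now on by default, a minimal sketch of opting out on a per-dataset basis via `tf.data` options, if the previous behavior is needed:

```python
import tensorflow as tf

# Disable the (now default) parallel_batch optimization for one dataset.
options = tf.data.Options()
options.experimental_optimization.parallel_batch = False

dataset = tf.data.Dataset.range(1000).batch(32).with_options(options)
```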
- `tf.lite`:
  - GPU
    - Added GPU delegation support for serialization to the Java API. This speeds up initialization by up to 90% when OpenCL is available.
  - Deprecated `Interpreter::SetNumThreads` in favor of `InterpreterBuilder::SetNumThreads` (a Python-side sketch follows this list).
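The deprecation above concerns the C++ API. For reference, a hedged Python-side sketch where the thread count is likewise supplied when the interpreter is constructed; `model.tflite` is a placeholder path:

```python
import tensorflow as tf

# Supply the thread count at construction time, mirroring the move from
# Interpreter::SetNumThreads to InterpreterBuilder::SetNumThreads in C++.
interpreter = tf.lite.Interpreter(model_path='model.tflite', num_threads=4)
interpreter.allocate_tensors()
```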
- Adds `tf.compat.v1.keras.utils.get_or_create_layer` to aid migration to TF2 by enabling tracking of nested Keras models created in TF1 style, when used with the `tf.compat.v1.keras.utils.track_tf1_style_variables` decorator.
- `tf.keras`:
  - Preprocessing Layers (see the sketch after the `tf.keras` list):
    - Added a `tf.keras.layers.experimental.preprocessing.HashedCrossing` layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
    - Removed `keras.layers.experimental.preprocessing.CategoryCrossing`. Users should migrate to the `HashedCrossing` layer or use `tf.sparse.cross`/`tf.ragged.cross` directly.
    - Added additional `standardize` and `split` modes to `TextVectorization`: `standardize="lower"` will lowercase inputs, `standardize="string_punctuation"` will remove all punctuation, and `split="character"` will split on every Unicode character.
    - Added an `output_mode` argument to the `Discretization` and `Hashing` layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support `output_mode`.
    - All preprocessing layer output will follow the compute dtype of a `tf.keras.mixed_precision.Policy`, unless constructed with `output_mode="int"`, in which case output will be `tf.int64`. The output type of any preprocessing layer can be controlled individually by passing a `dtype` argument to the layer.
  - Added a `tf.random.Generator` for Keras initializers and all RNG code.
    - Added 3 new APIs for enabling/disabling/checking the usage of `tf.random.Generator` in the Keras backend, which will be the new backend for all RNG in Keras. We plan to switch on the new code path by default in TF 2.8, and the behavior change will likely cause some breakage on the user side (e.g. if a test checks against a golden number). These 3 APIs allow users to disable the new behavior and switch back to the legacy one if they prefer. In the future (e.g. TF 2.10), we expect to remove the legacy code path (stateful random ops) entirely, and these 3 APIs will be removed as well.
  - `tf.keras.callbacks.experimental.BackupAndRestore` is now available as `tf.keras.callbacks.BackupAndRestore`. The experimental endpoint is deprecated and will be removed in a future release.
  - `tf.keras.experimental.SidecarEvaluator` is now available as `tf.keras.utils.SidecarEvaluator`. The experimental endpoint is deprecated and will be removed in a future release.
  - Metrics update and collection logic in the default `Model.train_step()` is now customizable via overriding `Model.compute_metrics()`.
  - Losses computation logic in the default `Model.train_step()` is now customizable via overriding `Model.compute_loss()`.
  - `jit_compile` added to `Model.compile()` on an opt-in basis to compile the model's training step with XLA. Note that `jit_compile=True` may not necessarily work for all models (a sketch of these hooks follows the `tf.keras` list).
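A hedged sketch of the preprocessing additions above (`HashedCrossing` and the new `TextVectorization` modes); the data is illustrative:

```python
import tensorflow as tf

# Stateless feature cross of two scalar input columns into 10 hash bins.
crossing = tf.keras.layers.experimental.preprocessing.HashedCrossing(
    num_bins=10)
crossed = crossing((tf.constant(['a', 'b', 'a']),
                    tf.constant([101, 102, 101])))

# New TextVectorization modes: lowercase-only standardization and
# per-character splitting.
vectorizer = tf.keras.layers.TextVectorization(
    standardize='lower', split='character')
vectorizer.adapt(tf.constant(['Hello', 'World']))
tokens = vectorizer(tf.constant(['Hello']))
```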
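And a minimal sketch of the new customization hooks and the XLA opt-in, assuming a toy model; the overrides simply delegate to the default implementations as a starting point:

```python
import tensorflow as tf

class MyModel(tf.keras.Sequential):
    def compute_metrics(self, x, y, y_pred, sample_weight):
        # Default metric update and collection; add custom logic here.
        return super().compute_metrics(x, y, y_pred, sample_weight)

    def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None):
        # Default loss computation; add custom logic here.
        return super().compute_loss(x, y, y_pred, sample_weight)

model = MyModel([tf.keras.layers.Dense(1, input_shape=(8,))])
# Opt in to XLA compilation of the train step; may not work for all models.
model.compile(optimizer='sgd', loss='mse', jit_compile=True)
```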
- Add `tf.config.experimental.enable_op_determinism`, which makes TensorFlow ops run deterministically at the cost of performance. This is equivalent to setting the previously existing `TF_DETERMINISTIC_OPS` environment variable to `1`. The environment variable is now deprecated, so the `enable_op_determinism` function should be used instead. A minimal usage sketch follows.
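```python
import tensorflow as tf

# Replaces setting the TF_DETERMINISTIC_OPS=1 environment variable.
tf.config.experimental.enable_op_determinism()
# Determinism typically goes hand in hand with fixed seeds.
tf.keras.utils.set_random_seed(42)
```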
## Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate, dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai, Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek, jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo, Lequn Chen, long.chen, Louis Sugy, Mahmoud Abuzaina, Mao, Marius Brehler, Mark Harfouche, Martin Patz, Maxiwell S. Garcia, Meenakshi Venkataraman, Michael Melesse, Mrinal Tyagi, Måns Nilsson, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Oktay Ozturk, Patrice Vignola, Pawel-Polyai, Rama Ketineni, Ramesh Sampath, Reza Rahimi, Rob Suderman, Robert Kalmar, Rohit Santhanam, Sachin Muradi, Saduf2019, Samuel Marks, Shi,Guangyong, Sidong-Wei, Srinivasan Narayanamoorthy, Srishti Srivastava, Steven I Reeves, stevenireeves, Supernovae, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Thomas Schmeyer, tilakrayal, Valery Mironov, Victor Guo, Vignesh Kothapalli, Vishnuvardhan Janapati, wamuir, Wang,Quintin, William Muir, William Raveane, Yash Goel, Yimei Sun, Yong Tang, Yuduo Wu