# Release 2.11.0

## Breaking Changes

- The `tf.keras.optimizers.Optimizer` base class now points to the new Keras optimizer, while the old optimizers have been moved to the `tf.keras.optimizers.legacy` namespace. If your workflow fails because of this change, you may be facing one of the following issues:
  - **Checkpoint loading failure.** The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading but breaks checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to `tf.keras.optimizers.legacy.XXX` (e.g. `tf.keras.optimizers.legacy.Adam`).
  - **TF1 compatibility.** The new optimizer, `tf.keras.optimizers.Optimizer`, no longer supports TF1, so please use the legacy optimizer `tf.keras.optimizers.legacy.XXX`. We highly recommend migrating your workflow to TF2 for stable support and new features.
  - **Old optimizer API not found.** The new optimizer, `tf.keras.optimizers.Optimizer`, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call a deprecated API, please change your optimizer to the legacy optimizer.
  - **Learning rate schedule access.** When using a `tf.keras.optimizers.schedules.LearningRateSchedule`, the new optimizer's `learning_rate` property returns the current learning rate value instead of a `LearningRateSchedule` object as before. If you need to access the `LearningRateSchedule` object, please use `optimizer._learning_rate`.
  - **Custom optimizers based on the old optimizer.** Please make your optimizer subclass `tf.keras.optimizers.legacy.XXX`. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
  - **Errors such as `Cannot recognize variable...`.** The new optimizer requires all optimizer variables to be created at the first `apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop.
  - **Timeout or performance loss.** We do not anticipate this happening, but if you see such issues, please use the legacy optimizer and file an issue in the Keras GitHub repo.

  The old Keras optimizers will never be deleted, but will not receive any new features. New optimizers (for example, `tf.keras.optimizers.Adafactor`) will be implemented only on top of the new `tf.keras.optimizers.Optimizer` base class.
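The two most common migration paths above can be sketched as follows (using a trivial `Sequential` model purely for illustration):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

# Option 1: keep old checkpoints and TF1-era behavior by switching to the
# legacy optimizer namespace.
legacy_opt = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)
model.compile(optimizer=legacy_opt, loss="mse")

# Option 2: stay on the new optimizer, but create all optimizer variables
# up front so multi-stage training loops do not hit
# "Cannot recognize variable..." on a later apply_gradients() call.
new_opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
new_opt.build(model.trainable_variables)
```

Option 2 is only needed when the optimizer updates different variable subsets in different stages; a plain `model.fit()` builds the optimizer state automatically on the first step.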
- The `tensorflow/python/keras` code has been a legacy copy of Keras since the TensorFlow v2.7 release and will be deleted in the v2.12 release. Please remove any imports of `tensorflow.python.keras` and use the public API with `from tensorflow import keras` or `import tensorflow as tf; tf.keras`.
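The import change is mechanical; for example:

```python
# Before (private path, scheduled for deletion in v2.12):
# from tensorflow.python.keras import layers

# After (public API):
from tensorflow import keras
layers = keras.layers

# or equivalently:
import tensorflow as tf
dense = tf.keras.layers.Dense(8)
```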
## Major Features and Improvements

- `tf.lite`:
  - New operations supported: `tf.math.unsorted_segment_sum`, `tf.atan2`, and `tf.sign`.
  - Updates to existing operations: `tfl.mul` now supports complex32 inputs.
- `tf.experimental.StructuredTensor`:
  - Introduced `tf.experimental.StructuredTensor`, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
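As a minimal sketch, a `StructuredTensor` can be built from nested Python data and queried field by field (the field names here are made up for illustration):

```python
import tensorflow as tf

# Build a StructuredTensor from a list of dicts; each dict key becomes a
# named field, and variable-length fields become RaggedTensors.
st = tf.experimental.StructuredTensor.from_pyval(
    [{"name": "ada", "scores": [1, 2]},
     {"name": "bob", "scores": [3]}])

names = st.field_value("name")     # string Tensor of shape [2]
scores = st.field_value("scores")  # RaggedTensor with rows of length 2 and 1
```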
- `tf.keras`:
  - Added a new `get_metrics_result()` method to `tf.keras.models.Model`.
    - Returns the current metric values of the model as a dict.
  - Added a new group normalization layer, `tf.keras.layers.GroupNormalization`.
  - Added weight decay support for all Keras optimizers via the `weight_decay` argument.
  - Added the Adafactor optimizer, `tf.keras.optimizers.Adafactor`.
  - Added `warmstart_embedding_matrix` to `tf.keras.utils`.
    - This utility can be used to warmstart an embedding matrix, so you can reuse previously learned word embeddings when working with a new set of words that may include previously unseen words (the embedding vectors for unseen words are randomly initialized).
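Several of these additions can be seen together in a small sketch (the model and data are arbitrary placeholders):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,)),
    # New in 2.11: group normalization as a built-in layer
    # (groups must evenly divide the channel count, here 8 / 2).
    tf.keras.layers.GroupNormalization(groups=2),
    tf.keras.layers.Dense(1),
])
# New in 2.11: the Adafactor optimizer, plus the weight_decay argument
# that all Keras optimizers now accept.
model.compile(optimizer=tf.keras.optimizers.Adafactor(weight_decay=0.004),
              loss="mse", metrics=["mae"])

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)

# New in 2.11: read the current metric values as a dict without
# re-running evaluation.
result = model.get_metrics_result()
```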
- `tf.Variable`:
  - Added `CompositeTensor` as a base class to `ResourceVariable`.
    - This allows `tf.Variable`s to be nested in `tf.experimental.ExtensionType`s.
  - Added a new constructor argument `experimental_enable_variable_lifting` to `tf.Variable`, defaulting to `True`.
    - When it is set to `False`, the variable won't be lifted out of `tf.function`; thus it can be used as a `tf.function`-local variable: during each execution of the `tf.function`, the variable will be created and then disposed of, similar to a local (that is, stack-allocated) variable in C/C++. Currently, `experimental_enable_variable_lifting=False` only works on non-XLA devices (for example, under `tf.function(jit_compile=False)`).
- TF SavedModel:
  - Added `fingerprint.pb` to the SavedModel directory. The `fingerprint.pb` file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
- TF pip:
  - Windows CPU builds for x86/x64 processors are now built, maintained, tested, and released by a third party: Intel. Installing the Windows-native pip packages for `tensorflow` or `tensorflow-cpu` will install Intel's `tensorflow-intel` package. These packages are provided on an as-is basis. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. To use TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.
## Bug Fixes and Other Changes

- `tf.image`:
  - Added an optional parameter `return_index_map` to `tf.image.ssim`, which causes the returned value to be the local SSIM map instead of the global mean.
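A quick sketch of the new parameter on random images (the per-position map has spatial dimensions reduced by the SSIM filter window, so it is smaller than the input):

```python
import tensorflow as tf

a = tf.random.uniform((1, 64, 64, 3))
b = tf.random.uniform((1, 64, 64, 3))

# Default behavior: one scalar SSIM value per image in the batch.
mean_ssim = tf.image.ssim(a, b, max_val=1.0)

# New in 2.11: the local SSIM map instead of the global mean.
ssim_map = tf.image.ssim(a, b, max_val=1.0, return_index_map=True)
```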
- TF Core:
  - `tf.custom_gradient` can now be applied to functions that accept "composite" tensors, such as `tf.RaggedTensor`, as inputs.
  - Fixed device placement issues related to datasets with ragged tensors of strings (i.e. variant-encoded data with types not supported on GPU).
  - `experimental_follow_type_hints` for `tf.function` has been deprecated. Please use `input_signature` or `reduce_retracing` to minimize retracing.
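A minimal sketch of a custom gradient over a `tf.RaggedTensor` input (the function and its gradient here are trivial, chosen only to show that composite tensors now pass through `tf.custom_gradient`):

```python
import tensorflow as tf

@tf.custom_gradient
def scale_ragged(rt):
    # Forward pass: elementwise 2*x over the ragged values.
    def grad(upstream):
        return upstream * 2.0  # d(2x)/dx = 2
    return rt * 2.0, grad

rt = tf.ragged.constant([[1.0, 2.0], [3.0]])
with tf.GradientTape() as tape:
    tape.watch(rt.flat_values)
    out = scale_ragged(rt)
grads = tape.gradient(out, rt.flat_values)
```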
- `tf.SparseTensor`:
  - Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape`.
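This matters mostly inside `tf.function`, where the dense shape may be dynamic; a minimal sketch:

```python
import tensorflow as tf

@tf.function(input_signature=[
    tf.SparseTensorSpec(shape=[None, None], dtype=tf.float32)])
def densify(sp):
    # The dense shape is dynamic under this signature; pin it statically,
    # with the same semantics as tf.Tensor.set_shape.
    sp.set_shape([4, 10])
    return tf.sparse.to_dense(sp)

sp = tf.sparse.SparseTensor(indices=[[0, 1], [2, 3]],
                            values=[1.0, 2.0],
                            dense_shape=[4, 10])
dense = densify(sp)
```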
## Security

- TF is currently using giflib 5.2.1, which has CVE-2022-28506. TF is not affected by the CVE as it does not use `DumpScreen2RGB` at all.
- Fixes an OOB segfault in `DynamicStitch` due to missing validation (CVE-2022-41883)
- Fixes an overflow in `tf.keras.losses.poisson` (CVE-2022-41887)
- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` fail in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in Grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes a segfault in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an invalid char-to-bool conversion when printing a tensor (CVE-2022-41911)
- Fixes a heap overflow in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)
## Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika