ml.utils.caching

Defines a wrapper for caching function calls to a file location.

class ml.utils.caching.cached_object(cache_key: str, ext: Literal['pkl', 'json'] = 'pkl', ignore: bool = False, cache_obj: bool = True)[source]

Bases: object

Defines a wrapper for caching function calls to a file location.

This provides a convenient way to cache the results of heavy operations to disk under a specific key.

Parameters:
  • cache_key – The key used to name the cache file

  • ext – The serialization format to use ('json' or 'pkl')

  • ignore – Should the cache be ignored?

  • cache_obj – If set, keep the deserialized object in memory so subsequent accesses avoid re-reading it from disk
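To illustrate the pattern, here is a minimal sketch of such a wrapper, not the library's actual implementation. It assumes decorator-style usage, caches only via pickle (the 'json' path is omitted), and writes to the system temp directory; all of these details are assumptions for the example.

```python
import functools
import os
import pickle
import tempfile


class cached_object:
    """Minimal sketch: cache a function's return value to disk under a key."""

    def __init__(self, cache_key: str, ignore: bool = False, cache_obj: bool = True) -> None:
        self.cache_key = cache_key
        self.ignore = ignore          # if True, bypass any existing cache file
        self.cache_obj = cache_obj    # if True, also keep the object in memory
        self._obj = None              # in-memory copy when cache_obj is set

    def __call__(self, fn):
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            # Fast path: return the in-memory copy if we have one.
            if self.cache_obj and self._obj is not None:
                return self._obj
            # Hypothetical cache location; the real library may differ.
            path = os.path.join(tempfile.gettempdir(), f"{self.cache_key}.pkl")
            if not self.ignore and os.path.exists(path):
                with open(path, "rb") as f:
                    obj = pickle.load(f)
            else:
                obj = fn(*args, **kwargs)
                with open(path, "wb") as f:
                    pickle.dump(obj, f)
            if self.cache_obj:
                self._obj = obj
            return obj

        return wrapped
```

With this sketch, decorating a function means its first call computes and persists the result, while later calls (even across processes, unless ignore is set) load it from the cache file instead.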

class ml.utils.caching.DictIndex(items: Mapping[Tk, Sequence[Tv]])[source]

Bases: Generic[Tk, Tv]

Indexes a dictionary with values that are lists.

This lazily indexes all the values in the provided dictionary, flattens them out and allows them to be looked up by a specific index. This is analogous to PyTorch’s ConcatDataset.

Parameters:
  • items – The dictionary to index
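As an illustration of the flattening behavior described above, here is a minimal sketch of such an index, not the library's implementation. The choice to build the index lazily and to return (key, value) pairs from __getitem__ are assumptions made for the example.

```python
from typing import Generic, List, Mapping, Optional, Sequence, Tuple, TypeVar

Tk = TypeVar("Tk")
Tv = TypeVar("Tv")


class DictIndex(Generic[Tk, Tv]):
    """Minimal sketch: flatten a mapping of key -> sequence so items can be
    looked up by a single global index, analogous to PyTorch's ConcatDataset."""

    def __init__(self, items: Mapping[Tk, Sequence[Tv]]) -> None:
        self.items = items
        self._index: Optional[List[Tuple[Tk, int]]] = None  # built lazily

    def _build(self) -> List[Tuple[Tk, int]]:
        # Lazily enumerate (key, position) pairs in insertion order.
        if self._index is None:
            self._index = [
                (key, i)
                for key, values in self.items.items()
                for i in range(len(values))
            ]
        return self._index

    def __len__(self) -> int:
        return len(self._build())

    def __getitem__(self, index: int) -> Tuple[Tk, Tv]:
        key, i = self._build()[index]
        return key, self.items[key][i]
```

For example, indexing {"a": [1, 2], "b": [3]} yields three items, with global index 2 resolving to ("b", 3), just as ConcatDataset maps a global index back into one of its constituent datasets.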