neuralset.extractors.meta.TimeAggregatedExtractor

pydantic model neuralset.extractors.meta.TimeAggregatedExtractor[source]

Remove the time dimension of a dynamic extractor, either by summing/averaging or by selecting the first or last time point.

NOTE: This is not exactly a static extractor because its output depends on the start and duration of the window (whereas static extractors only depend on the event). Hence, the get_static method is not implemented.

Parameters:
  • time_aggregation (str) – How to aggregate the time dimension. One of “sum”, “mean”, “first” or “last”.

  • n_groups_concat (int | None) – If provided, the time dimension is divided into n_groups_concat equal parts and the aggregation is carried out within each group; the per-group results are then concatenated.

  • extractor (BaseExtractor) – The extractor to aggregate.
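The interaction between time_aggregation and n_groups_concat can be sketched as follows. This is a hypothetical stand-alone helper illustrating the documented semantics, not part of the neuralset API, and it uses NumPy arrays in place of torch tensors:

```python
import numpy as np

def aggregate_time(x, how="mean", n_groups=None):
    """Collapse the leading time axis of ``x`` (shape (T, *feature_shape)).

    Illustrative only: mirrors the documented behaviour of
    TimeAggregatedExtractor, with NumPy standing in for torch.
    """
    def collapse(chunk):
        if how == "sum":
            return chunk.sum(axis=0)
        if how == "mean":
            return chunk.mean(axis=0)
        if how == "first":
            return chunk[0]
        if how == "last":
            return chunk[-1]
        raise ValueError(f"unknown aggregation {how!r}")

    if n_groups is None:
        return collapse(x)
    # Split the time axis into n_groups equal parts, aggregate within
    # each group, then concatenate the per-group results.
    groups = np.array_split(x, n_groups, axis=0)
    return np.concatenate([collapse(g) for g in groups], axis=0)

x = np.arange(12, dtype=float).reshape(6, 2)     # 6 time steps, 2 features
print(aggregate_time(x, "mean").shape)           # (2,)  -- time axis removed
print(aggregate_time(x, "mean", n_groups=3).shape)  # (6,) -- 3 groups x 2 features
```

With n_groups set, the output feature size grows by that factor, which is why the wrapped extractor's output shape matters when configuring downstream layers.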

Fields:
field time_aggregation: Literal['sum', 'mean', 'first', 'last'] = 'mean'[source]
field n_groups_concat: Annotated[int, Gt(gt=0)] | None = None[source]
field event_types: str | tuple[str, ...] = 'Event'[source]
field extractor: BaseExtractor [Required][source]
prepare(events: DataFrame) → None[source]

Pre-compute and cache extractor data for a collection of events.

This method triggers _get_data on every matching event so that expensive computation (e.g. model inference) is done once and cached. It then calls the extractor on a single event to populate the output shape, which is needed when allow_missing=True.

Call prepare before using the extractor in a dataloader.

Parameters:

events (DataFrame or sequence of Event or sequence of Segment) – The structure containing the events. When calling prepare on several objects, prefer passing a list of events or segments over a DataFrame to avoid redundant conversion overhead.
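The precompute-and-cache pattern that prepare relies on can be sketched minimally. All names below (CachedExtractor, event ids as plain integers, the dummy feature) are hypothetical stand-ins, not the neuralset API:

```python
class CachedExtractor:
    """Hypothetical sketch of the caching behind ``prepare``."""

    def __init__(self):
        self.calls = 0      # counts expensive computations actually performed
        self._cache = {}

    def _get_data(self, event_id):
        # Expensive step (e.g. model inference), done at most once per event.
        if event_id not in self._cache:
            self.calls += 1
            self._cache[event_id] = [float(event_id)] * 4  # dummy feature
        return self._cache[event_id]

    def prepare(self, events):
        # Trigger _get_data on every event up front, so later access
        # from a dataloader is a cheap cache hit.
        for e in events:
            self._get_data(e)

ex = CachedExtractor()
ex.prepare([1, 2, 3])
ex._get_data(2)       # cache hit: no new computation
print(ex.calls)       # 3
```

This is why calling prepare before handing the extractor to a dataloader matters: the expensive work happens once, outside the training loop.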

get_static(*args: Any, **kwargs: Any) → Tensor[source]

Return a single feature vector for the given event.

Override this method in subclasses to produce a static (non-temporal) embedding for one event. The returned tensor should have no time dimension; temporal wrapping is handled by BaseStatic automatically.

Parameters:

event (Event) – The event to extract a feature from.

Returns:

A tensor of shape (*feature_shape,) (no time axis).

Return type:

torch.Tensor
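A subclass override of get_static can be sketched as follows. ToyStaticExtractor, its feature_shape attribute, and the dict-based event are all hypothetical, and NumPy stands in for torch; the point is only the contract stated above: one feature vector per event, no time axis:

```python
import numpy as np

class ToyStaticExtractor:
    """Hypothetical subclass sketch of the get_static contract."""

    feature_shape = (3,)

    def get_static(self, event):
        # One non-temporal embedding per event, shape (*feature_shape,).
        # Seeding from the event id makes the output deterministic.
        rng = np.random.default_rng(seed=event["id"])
        return rng.standard_normal(self.feature_shape)

ex = ToyStaticExtractor()
vec = ex.get_static({"id": 7})
print(vec.shape)   # (3,) -- no time dimension
```

Note that on TimeAggregatedExtractor itself this method is not implemented (see the note at the top of this page), since the output depends on the window's start and duration, not only on the event.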

requirements: tp.ClassVar[tuple[str, ...]] = ()[source]