
getml.pipeline.Columns

Columns(
    pipeline: str,
    targets: Sequence[str],
    peripheral: Sequence[Placeholder],
    data: Optional[Sequence[Column]] = None,
)

Container which holds a pipeline's columns. These include the columns for which an importance can be calculated, i.e. those with the roles categorical, numerical, or text. Columns with the roles time_stamp, join_key, target, unused_float, and unused_string cannot have an importance.

Columns can be accessed by name, by index, or with a NumPy array. The container supports slicing and is sortable and filterable. Further, the container holds global methods to request the columns' importances and to apply a column selection to data frames provided to the pipeline.

PARAMETER DESCRIPTION
pipeline

The id of the pipeline.

TYPE: str

targets

The names of the targets used for this pipeline.

TYPE: Sequence[str]

peripheral

The abstract representation of peripheral tables used for this pipeline.

TYPE: Sequence[Placeholder]

data

The columns to be stored in the container. If not provided, they are obtained from the Engine.

TYPE: Optional[Sequence[Column]] DEFAULT: None

Note

The container is an iterable, so in addition to filter you can also use Python list comprehensions for filtering.

Example
all_my_columns = my_pipeline.columns

first_column = my_pipeline.columns[0]

all_but_last_10_columns = my_pipeline.columns[:-10]

important_columns = [
    column for column in my_pipeline.columns if column.importance > 0.1
]

names, importances = my_pipeline.columns.importances()

# Drops all categorical and numerical columns that are not in the top 20%.
new_container = my_pipeline.columns.select(
    container, share_selected_columns=0.2,
)
Source code in getml/pipeline/columns.py
def __init__(
    self,
    pipeline: str,
    targets: Sequence[str],
    peripheral: Sequence[Placeholder],
    data: Optional[Sequence[Column]] = None,
) -> None:
    if not isinstance(pipeline, str):
        raise ValueError("'pipeline' must be a str.")

    if not _is_typed_list(targets, str):
        raise TypeError("'targets' must be a list of str.")

    self.pipeline = pipeline

    self.targets = targets

    self.peripheral = peripheral

    self.peripheral_names = [p.name for p in self.peripheral]

    if data is not None:
        self.data = data
    else:
        self._load_columns()

names property

names: List[str]

Holds the names of a Pipeline's columns.

RETURNS DESCRIPTION
List[str]

List containing the names.

Note

The order corresponds to the current sorting of the container.

filter

filter(conditional: Callable[[Column], bool]) -> Columns

Filters the columns container.

PARAMETER DESCRIPTION
conditional

A callable that evaluates to a boolean for a given item.

TYPE: Callable[[Column], bool]

RETURNS DESCRIPTION
Columns

A container of filtered Columns.

Example
important_columns = my_pipeline.columns.filter(lambda column: column.importance > 0.1)
peripheral_columns = my_pipeline.columns.filter(lambda column: column.marker == "[PERIPHERAL]")
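Since filter boils down to a list comprehension over the container (see the source below), its behavior can be sketched with stand-in column objects. FakeColumn here is hypothetical and used only for illustration; the real container holds getml Column objects:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FakeColumn:
    """Illustrative stand-in for getml's Column; not the real class."""
    name: str
    importance: float

def filter_columns(
    columns: List[FakeColumn], conditional: Callable[[FakeColumn], bool]
) -> List[FakeColumn]:
    # Same logic as Columns.filter: keep the items for which the callable is True.
    return [column for column in columns if conditional(column)]

cols = [FakeColumn("a", 0.50), FakeColumn("b", 0.05), FakeColumn("c", 0.45)]
important = filter_columns(cols, lambda column: column.importance > 0.1)
print([column.name for column in important])  # ['a', 'c']
```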
Source code in getml/pipeline/columns.py
def filter(self, conditional: Callable[[Column], bool]) -> Columns:
    """
    Filters the columns container.

    Args:
        conditional:
            A callable that evaluates to a boolean for a given item.

    Returns:
        A container of filtered Columns.

    ??? example
        ```python
        important_columns = my_pipeline.columns.filter(lambda column: column.importance > 0.1)
        peripheral_columns = my_pipeline.columns.filter(lambda column: column.marker == "[PERIPHERAL]")
        ```
    """
    columns_filtered = [column for column in self.data if conditional(column)]
    return self._make_columns(columns_filtered)

importances

importances(
    target_num: int = 0, sort: bool = True
) -> Tuple[NDArray[str_], NDArray[float_]]

Returns the data for the column importances.

Column importances extend the idea of feature importances to the columns originally inserted into the pipeline. Each column is assigned an importance value that measures its contribution to the predictive performance. All column importances add up to 1.

The importances can be calculated for columns with roles such as categorical, numerical and text. Columns with the roles time_stamp, join_key, target, unused_float and unused_string cannot have an importance.

PARAMETER DESCRIPTION
target_num

Indicates for which target you want to view the importances. (Pipelines can have more than one target.)

TYPE: int DEFAULT: 0

sort

Whether you want the results to be sorted.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION
NDArray[str_]

The first array contains the names of the columns.

NDArray[float_]

The second array contains their importances. By definition, all importances add up to 1.
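The two returned arrays are positionally aligned: the i-th importance belongs to the i-th name. A self-contained sketch with stand-in values (the real arrays come from the getml Engine; the name format mirrors the source below, "marker table.name"):

```python
import numpy as np

# Stand-in data, for illustration only; real values come from the Engine.
names = np.asarray(["[POPULATION] traffic.hour", "[PERIPHERAL] weather.temp"])
importances = np.asarray([0.7, 0.3])

# The arrays are positionally aligned and the importances sum to 1.
for name, imp in zip(names, importances):
    print(f"{name}: {imp:.2f}")
```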

Source code in getml/pipeline/columns.py
def importances(
    self, target_num: int = 0, sort: bool = True
) -> Tuple[NDArray[np.str_], NDArray[np.float_]]:
    """
    Returns the data for the column importances.

    Column importances extend the idea of feature importances
    to the columns originally inserted into the pipeline.
    Each column is assigned an importance value that measures
    its contribution to the predictive performance. All
    column importances add up to 1.

    The importances can be calculated for columns with
    [`roles`][getml.data.roles] such as [`categorical`][getml.data.roles.categorical],
    [`numerical`][getml.data.roles.numerical] and [`text`][getml.data.roles.text].
    Columns with the roles [`time_stamp`][getml.data.roles.time_stamp],
    [`join_key`][getml.data.roles.join_key], [`target`][getml.data.roles.target],
    [`unused_float`][getml.data.roles.unused_float] and
    [`unused_string`][getml.data.roles.unused_string] cannot have an importance.

    Args:
        target_num:
            Indicates for which target you want to view the
            importances.
            (Pipelines can have more than one target.)

        sort:
            Whether you want the results to be sorted.

    Returns:
        The first array contains the names of the columns.
        The second array contains their importances. By definition, all importances add up to 1.
    """

    # ------------------------------------------------------------

    descriptions, importances = self._get_column_importances(
        target_num=target_num, sort=sort
    )

    # ------------------------------------------------------------

    names = np.asarray(
        [d["marker_"] + " " + d["table_"] + "." + d["name_"] for d in descriptions]
    )

    # ------------------------------------------------------------

    return names, importances

select

select(
    container: Union[Container, StarSchema, TimeSeries],
    share_selected_columns: float = 0.5,
) -> Container

Returns a new data container with all insufficiently important columns dropped.

PARAMETER DESCRIPTION
container

The container containing the data you want to use.

TYPE: Union[Container, StarSchema, TimeSeries]

share_selected_columns

The share of columns to keep. Must be between 0.0 and 1.0.

TYPE: float DEFAULT: 0.5

RETURNS DESCRIPTION
Container

A new container with the columns dropped.
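The number of columns kept is the ceiling of share_selected_columns times the total column count (see the source below), so at least one column survives any nonzero share. A minimal sketch of that arithmetic; num_columns_to_keep is a hypothetical helper, not part of the getml API:

```python
import numpy as np

def num_columns_to_keep(n_columns: int, share_selected_columns: float) -> int:
    """Mirrors the selection arithmetic in the source below: ceil(share * n)."""
    if not 0.0 <= share_selected_columns <= 1.0:
        raise ValueError("'share_selected_columns' must be between 0 and 1!")
    return int(np.ceil(share_selected_columns * n_columns))

print(num_columns_to_keep(25, 0.2))   # 5
print(num_columns_to_keep(10, 0.05))  # 1: np.ceil rounds up, so at least
                                      # one column survives a nonzero share
```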

Source code in getml/pipeline/columns.py
def select(
    self,
    container: Union[Container, StarSchema, TimeSeries],
    share_selected_columns: float = 0.5,
) -> Container:
    """
    Returns a new data container with all insufficiently important columns dropped.

    Args:
        container:
            The container containing the data you want to use.

        share_selected_columns: The share of columns
            to keep. Must be between 0.0 and 1.0.

    Returns:
        A new container with the columns dropped.
    """

    # ------------------------------------------------------------

    if isinstance(container, (StarSchema, TimeSeries)):
        data = self.select(
            container.container, share_selected_columns=share_selected_columns
        )
        new_container = deepcopy(container)
        new_container._container = data
        return new_container

    # ------------------------------------------------------------

    if not isinstance(container, Container):
        raise TypeError(
            "'container' must be a getml.data.Container, "
            + "a getml.data.StarSchema or a getml.data.TimeSeries"
        )

    if not isinstance(share_selected_columns, numbers.Real):
        raise TypeError("'share_selected_columns' must be a real number!")

    if share_selected_columns < 0.0 or share_selected_columns > 1.0:
        raise ValueError("'share_selected_columns' must be between 0 and 1!")

    # ------------------------------------------------------------

    descriptions, _ = self._get_column_importances(target_num=-1, sort=True)

    # ------------------------------------------------------------

    num_keep = int(np.ceil(share_selected_columns * len(descriptions)))

    keep_columns = descriptions[:num_keep]

    # ------------------------------------------------------------

    subsets = {
        k: _drop(v, keep_columns, k, POPULATION)
        for (k, v) in container.subsets.items()
    }

    peripheral = {
        k: _drop(v, keep_columns, k, PERIPHERAL)
        for (k, v) in container.peripheral.items()
    }

    # ------------------------------------------------------------

    new_container = Container(**subsets)
    new_container.add(**peripheral)
    new_container.freeze()

    # ------------------------------------------------------------

    return new_container

sort

sort(
    by: Optional[str] = None,
    key: Optional[Callable[[Column], Any]] = None,
    descending: Optional[bool] = None,
) -> Columns

Sorts the Columns container. If no arguments are provided, the container is sorted by target and name.

PARAMETER DESCRIPTION
by

The name of the field to sort by. Possible fields:

- name(s)
- table(s)
- importance(s)

TYPE: Optional[str] DEFAULT: None

key

A callable that evaluates to a sort key for a given item.

TYPE: Optional[Callable[[Column], Any]] DEFAULT: None

descending

Whether to sort in descending order.

TYPE: Optional[bool] DEFAULT: None

RETURNS DESCRIPTION
Columns

A container of sorted columns.

Example
by_importance = my_pipeline.columns.sort(key=lambda column: column.importance)
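One detail worth noting from the source below: sorting by importance defaults to descending order, while sorting by name or table defaults to ascending. A self-contained sketch with a hypothetical FakeColumn stand-in:

```python
from dataclasses import dataclass

@dataclass
class FakeColumn:
    """Illustrative stand-in for getml's Column; not the real class."""
    name: str
    importance: float

cols = [FakeColumn("b", 0.2), FakeColumn("a", 0.5), FakeColumn("c", 0.3)]

# Mirrors sort(by="importance"): descending by default, most important first.
by_importance = sorted(cols, key=lambda column: column.importance, reverse=True)
print([column.name for column in by_importance])  # ['a', 'c', 'b']
```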
Source code in getml/pipeline/columns.py
def sort(
    self,
    by: Optional[str] = None,
    key: Optional[Callable[[Column], Any]] = None,
    descending: Optional[bool] = None,
) -> Columns:
    """
    Sorts the Columns container. If no arguments are provided, the
    container is sorted by target and name.

    Args:
        by:
            The name of the field to sort by. Possible fields:
                - name(s)
                - table(s)
                - importance(s)
        key:
            A callable that evaluates to a sort key for a given item.
        descending:
            Whether to sort in descending order.

    Returns:
            A container of sorted columns.

    ??? example
        ```python
        by_importance = my_pipeline.columns.sort(key=lambda column: column.importance)
        ```
    """

    reverse = False if descending is None else descending

    if (by is not None) and (key is not None):
        raise ValueError("Only one of `by` and `key` can be provided.")

    if key is not None:
        columns_sorted = sorted(self.data, key=key, reverse=reverse)
        return self._make_columns(columns_sorted)

    if by is None:
        columns_sorted = sorted(
            self.data, key=lambda column: column.name, reverse=reverse
        )
        columns_sorted.sort(key=lambda column: column.target)
        return self._make_columns(columns_sorted)

    if re.match(pattern="names?$", string=by):
        columns_sorted = sorted(
            self.data, key=lambda column: column.name, reverse=reverse
        )
        return self._make_columns(columns_sorted)

    if re.match(pattern="tables?$", string=by):
        columns_sorted = sorted(
            self.data,
            key=lambda column: column.table,
        )
        return self._make_columns(columns_sorted)

    if re.match(pattern="importances?$", string=by):
        reverse = True if descending is None else descending
        columns_sorted = sorted(
            self.data, key=lambda column: column.importance, reverse=reverse
        )
        return self._make_columns(columns_sorted)

    raise ValueError(f"Cannot sort by: {by}.")

to_pandas

to_pandas() -> DataFrame

Returns all information related to the columns in a pandas data frame.
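The resulting data frame has one row per column and one column per pivoted field (name, marker, table, importance, target), as the source below shows. A reduced sketch of the same construction pattern, using stand-in values for two of the fields:

```python
import numpy as np
import pandas as pd

# Stand-in values, for illustration; the real ones are pivoted out of the
# container's columns.
names = ["hour", "temp"]
importances = [0.7, 0.3]

# Same pattern as the source below: an integer index first, then one
# pandas column per pivoted field.
df = pd.DataFrame(index=np.arange(len(names)))
df["name"] = names
df["importance"] = importances
print(df)
```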

Source code in getml/pipeline/columns.py
def to_pandas(self) -> pd.DataFrame:
    """Returns all information related to the columns in a pandas data frame."""

    names, markers, tables, importances, targets = (
        self._pivot(field)
        for field in ["name", "marker", "table", "importance", "target"]
    )

    data_frame = pd.DataFrame(index=np.arange(len(self.data)))

    data_frame["name"] = names

    data_frame["marker"] = markers

    data_frame["table"] = tables

    data_frame["importance"] = importances

    data_frame["target"] = targets

    return data_frame