Batch Data Sources

Batch data sources allow you to configure connections to data-at-rest sources.

To define a batch data source, create a configuration object that connects to the raw data source.

Batch data sources share three common parameters:

  • name: A unique data source identifier used to address it from a feature set object. It may contain only letters, numbers, and underscores (_).
  • description: A general description of the data source.
  • date_created_column: Used to filter the data by the batch's start time/end time. This column must exist in the data source and hold the timestamp representing each record's time.

❗️

date_created_column values must be monotonically increasing.


Registering new data sources

When registering a batch data source, the Qwak system tries to validate it by fetching a sample to verify that it can query the data source.

Additionally, batch data sources support the following validation function:

def get_sample(self, number_of_rows: int = 10) -> DataFrame:

Usage example:

from qwak.feature_store.data_sources import ParquetSource, AnonymousS3Configuration

parquet_source = ParquetSource(
    name='parquet_source',
    description='a parquet source description',
    date_created_column='date_created',
    path="s3://bucket-name/data.parquet",
    filesystem_configuration=AnonymousS3Configuration()
)

pandas_df = parquet_source.get_sample()

When invoking this function, the Qwak system validates the data source before returning a Pandas DataFrame. If an error occurs while fetching the sample, the system indicates at which stage it failed. For example, it can fail:

  • When connecting to the specified bucket.
  • If the date_created_column does not exist or is not of the right type.
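
For illustration, a minimal sketch of wrapping the sample fetch so a validation failure surfaces clearly (the broad except is only for demonstration; the error raised indicates the failing stage):

try:
    sample_df = parquet_source.get_sample(number_of_rows=10)
    print(sample_df.head())
except Exception as error:  # the error message describes the failing validation stage
    print(f'Data source validation failed: {error}')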


1. Snowflake

Before creating a Snowflake connector, make sure you have the following:

  1. A Snowflake user (read-only access is sufficient)
  2. Connectivity between the Qwak environment and the Snowflake host

There are two distinct ways to use the Snowflake connector:

1. Providing a table.
from qwak.feature_store.data_sources import SnowflakeSource

snowflake_source = SnowflakeSource(
    name='snowflake_source',
    description='a snowflake source description',
    date_created_column='insert_date_column',
    host='<SnowflakeAddress/DNS:port>',
    username_secret_name='qwak_secret_snowflake_user', # use secret service
    password_secret_name='qwak_secret_snowflake_password', # use secret service
    database='db_name',
    schema='schema_name',
    warehouse='data_warehouse_name',
    table='snowflake_table'
)
2. Providing a query.
from qwak.feature_store.data_sources import SnowflakeSource

snowflake_source = SnowflakeSource(
    name='snowflake_source',
    description='a snowflake source description',
    date_created_column='insert_date_column',
    host='<SnowflakeAddress/DNS:port>',
    username_secret_name='qwak_secret_snowflake_user', # use secret service
    password_secret_name='qwak_secret_snowflake_password', # use secret service
    database='db_name',
    schema='schema_name',
    warehouse='data_warehouse_name',
    query='select feature1, feature2 from snowflake_table'
)
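
As with any batch data source, you can verify the connection by fetching a small sample (a usage sketch reusing the snowflake_source defined above):

# Fetch a few rows to verify connectivity and the date_created_column type
snowflake_df = snowflake_source.get_sample(number_of_rows=5)
print(snowflake_df.dtypes)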


2. BigQuery

To access a BigQuery source, download the credentials.json file from GCP to your local file system.

Permissions

The following permissions must be granted to the credentials provided in the credentials.json file:

bigquery.tables.create
bigquery.tables.getData  
bigquery.tables.get
bigquery.readsessions.*  
bigquery.jobs.create

Uploading Credentials

Once you've downloaded credentials.json, encode it with base64 and set it as a Qwak secret using the Qwak Secret Service.

import json
import base64
from qwak.clients.secret_service import SecretServiceClient

with open('/path/of/credentials/credentials.json', 'r') as f:
    creds = json.load(f)

creds64 = base64.b64encode(json.dumps(creds).encode('utf-8')).decode('utf-8')

secrets_service = SecretServiceClient()
secrets_service.set_secret(name='qwak_secret_big_query_creds', value=creds64)

Connecting to BigQuery

There are two distinct ways to use the BigQuery connector:

1. Providing a dataset and table.

from qwak.feature_store.data_sources import BigquerySource

some_bigquery_source = BigquerySource(
    name='big_query_source',
    description='a bigquery source description',
    date_created_column='date_created',
    credentials_secret_name='qwak_secret_big_query_creds',
    dataset='dataset_name',
    table='table_name',
    project='project_id',
    materialization_project='materialization_project_name',
    parent_project='parent_project',
    views_enabled=False
)

2. Providing SQL.

from qwak.feature_store.data_sources import BigquerySource

big_query_source = BigquerySource(
    name='big_query',
    description='a big query source description',
    date_created_column='date_created',
    credentials_secret_name='bigquerycred',
    project='project_id',
    sql="""SELECT l.id as id, 
          SUM(l.feature1) as feature1, 
          SUM(r.feature2) as feature2,
          MAX(l.date_created) as date_created
          FROM `project_id.dataset.left` AS l
          JOIN `project_id.dataset.right` as r
          ON r.id = l.id 
          GROUP BY id""",
    parent_project='',
    views_enabled=False
)


3. MongoDB

from qwak.feature_store.data_sources import MongoSource 

mongo_source = MongoSource(
    name='mongo_source',
    description='a mongo source description',
    date_created_column='insert_date_column',
    hosts='<MongoAddress/DNS:Port>',
    username_secret_name='qwak_secret_mongodb_user', #uses the Qwak Secret Service
    password_secret_name='qwak_secret_mongodb_pass', #uses the Qwak Secret Service
    database='db_name',
    collection='collection_name',
    connection_params='authSource=admin'
)


4. Amazon S3 Stored Files

Ingesting Data from Parquet Files

AWS S3 filesystem data sources support explicit credentials for a custom bucket (by default, the Qwak bucket is used).
To access data in a different S3 bucket, use this optional configuration.
After creating the relevant secrets using the Qwak CLI, you can use:

from qwak.feature_store.data_sources import ParquetSource, AwsS3FileSystemConfiguration

parquet_source = ParquetSource(
    name='my_source',
    description='some s3 data source',
    date_created_column='DATE_CREATED',
    path='s3://mybucket/parquet_test_data.parquet',
    filesystem_configuration=AwsS3FileSystemConfiguration(
        access_key_secret_name='mybucket_access_key',
        secret_key_secret_name='mybucket_secret_key',
        bucket='mybucket'
    )
)

🚧

Timestamp Column

Ensure that the timestamp column in your Parquet file(s) is represented using the appropriate PyArrow timestamp data type with microsecond precision.

You can achieve this by casting the timestamp column to the desired precision. Here's an example:

timestamp_column_microseconds = timestamp_column.cast('timestamp[us]')

In the code snippet above, timestamp_column_microseconds refers to the timestamp column cast to microsecond precision. This column holds information such as the date and time a record was created, denoted as date_created.

Using Pandas timestamp data types such as datetime64[ns] or int64 will result in an error when fetching data from the Parquet source.
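
For context, a minimal sketch (not Qwak-specific, with illustrative file and column names) of writing a Parquet file whose date_created column is stored with microsecond precision using PyArrow:

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame({
    'id': [1, 2],
    'date_created': pd.to_datetime(['2020-01-01T00:00:00', '2020-01-02T00:00:00']),
})

table = pa.Table.from_pandas(df)

# Cast the pandas-derived timestamp[ns] column down to timestamp[us]
timestamp_column_microseconds = table.column('date_created').cast(pa.timestamp('us'))
table = table.set_column(
    table.schema.get_field_index('date_created'),
    'date_created',
    timestamp_column_microseconds,
)

pq.write_table(table, 'data.parquet')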


Ingesting Data from CSV Files

CSV access works like reading a Parquet file from S3: we either specify the AWS access keys or access a public object.

from qwak.feature_store.data_sources import CsvSource, AnonymousS3Configuration

csv_source = CsvSource(
    name='csv_source',
    description='a csv source description',
    date_created_column='date_created',
    path="s3://bucket-name/data.csv",
    filesystem_configuration=AnonymousS3Configuration(),
    quote_character='"',
    escape_character='"'
)

📘

Public S3 bucket access

When using any public bucket, such as qwak-public or nyc-tlc, use AnonymousS3Configuration to access it without credentials, as shown in the example above.

🚧

The default timestamp format for date_created_column in CSV files is yyyy-MM-dd'T'HH:mm:ss, optionally with [.SSS][XXX], for example 2020-01-01T00:00:00.
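
For example, a small sketch (illustrative, using pandas) of formatting a datetime column to this representation before writing the CSV:

import pandas as pd

df = pd.DataFrame({
    'id': [1, 2],
    'date_created': pd.to_datetime(['2020-01-01 00:00:00', '2020-01-02 12:30:00']),
})

# Render the timestamps as yyyy-MM-dd'T'HH:mm:ss strings
df['date_created'] = df['date_created'].dt.strftime('%Y-%m-%dT%H:%M:%S')
df.to_csv('data.csv', index=False)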


Accessing Private Amazon S3 Buckets in Data Sources

To securely use data stored in private Amazon S3 buckets within Qwak's feature store, two authentication methods are supported. This section explains how to set up access to private S3 buckets so that your data remains secure while still being accessible to your data sources.

  1. IAM Role ARN Based Authentication

This method allows Qwak to assume an IAM role with permissions to access your S3 bucket. Create an IAM role in AWS with the necessary permissions to access the S3 bucket. For a step-by-step guide, refer to Configuring IAM Roles for S3 Access.

from qwak.feature_store.sources.source_authentication import AwsAssumeRoleAuthentication

aws_authentication = AwsAssumeRoleAuthentication(role_arn='<YOUR_IAM_ROLE_ARN>')

  2. Credentials Based Authentication

For scenarios where IAM role-based access isn't preferred, use your AWS access key and secret key, stored securely as secrets in Qwak's Secrets Management Service.

from qwak.feature_store.data_sources.source_authentication import AwsCredentialsAuthentication

aws_authentication = AwsCredentialsAuthentication(access_key_secret_name='your-access-key-qwak-secret', 
                                                  secret_key_secret_name='your-secret-key-qwak-secret')

After setting up your authentication method, use the aws_authentication object to configure your CSV or Parquet data source by assigning it to the filesystem_configuration parameter, as in the example below:

from qwak.feature_store.data_sources.batch.csv import CsvSource
from qwak.feature_store.data_sources.batch.parquet import ParquetSource

csv_source = CsvSource(
    name='name_with_underscores',
    description='',
    date_created_column='your_date_related_column',
    path='s3://s3...',
    quote_character="'",
    escape_character="\\",
    filesystem_configuration=aws_authentication
)
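
The same authentication object can be assigned to a ParquetSource in the same way (a sketch mirroring the CSV example above):

parquet_source = ParquetSource(
    name='name_with_underscores',
    description='',
    date_created_column='your_date_related_column',
    path='s3://s3...',
    filesystem_configuration=aws_authentication
)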


5. Redshift

To connect to a Redshift source, grant access using either an AWS access key and secret key or an IAM role.

from qwak.feature_store.data_sources import RedshiftSource

redshift_source = RedshiftSource(
    name='my_source',
    date_created_column='DATE_CREATED',
    description='Some Redshift Source',
    url="company-redshift-cluster.xyz.us-east-1.redshift.amazonaws.com:5439/DBName",
    db_table='my_table',
    query='base query when fetching data from Redshift', # Must choose either db_table or query
    iam_role_arn='arn:aws:iam::123456789:role/assumed_role_redshift',
    db_user='dbuser_name',
)


6. MySQL

from qwak.feature_store.data_sources import MysqlSource

mysql_source = MysqlSource(
    name='mysql_source',
    description='a mysql source description',
    date_created_column='date_created',
    username_secret_name='qwak_secret_mysql_user', # uses the Qwak Secret Service
    password_secret_name='qwak_secret_mysql_pass', # uses the Qwak Secret Service
    url='<MysqlAddress/DNS:Port>',
    db_table='db.table_name',  # i.e database1.table1
    query='base query when fetching data from mysql'  # Must choose either db_table or query
)


7. Postgres

from qwak.feature_store.data_sources import PostgresqlSource

postgres_source = PostgresqlSource(
    name='postgresql_source',
    description='a postgres source description',
    date_created_column='date_created',
    username_secret_name='qwak_secret_postgres_user', # uses the Qwak Secret Service
    password_secret_name='qwak_secret_postgres_pass', # uses the Qwak Secret Service
    url='<PostgresqlAddress/DNS:Port/DBName>',
    db_table='schema.table_name',  # default schema: public
    query='base query when fetching data from postgres'  # Must choose either db_table or query
)


8. Clickhouse

from qwak.feature_store.data_sources import ClickhouseSource

clickhouse_source = ClickhouseSource(
    name='clickhouse_source',
    description='a clickhouse source description',
    date_created_column='date_created', # Has to be of format DateTime64
    username_secret_name='qwak_secret_clickhouse_user', # uses the Qwak Secret Service
    password_secret_name='qwak_secret_clickhouse_pass', # uses the Qwak Secret Service
    url='<ClickhouseAddress/DNS:Port/DBName>', # database name is optional
    db_table='database_name.table_name',  # default database: default
    query='base query when fetching data from clickhouse'  # Must choose either db_table or query
)


9. Vertica

from qwak.feature_store.data_sources import VerticaSource

vertica_source = VerticaSource(
    name='vertica_source',
    description='a vertica source description',
    date_created_column='date_created',
    username_secret_name='qwak_secret_vertica_user', # uses the Qwak Secret Service
    password_secret_name='qwak_secret_vertica_pass', # uses the Qwak Secret Service
    host='<VerticaHost>', # without the :port suffix
    port=5444,
    database='MyVerticaDatabase',
    schema='MyVerticaSchema', # e.g. public
    table='table_name'
)


10. AWS Athena

The Athena source connects Qwak to Amazon Athena, allowing users to query and ingest data seamlessly.

from qwak.feature_store.sources.data_sources import AthenaSource
from qwak.feature_store.sources.source_authentication import AwsAssumeRoleAuthentication
from qwak.feature_store.sources.time_partition_columns import DatePartitionColumns

athena_source = AthenaSource(
    name='my_athena_source',
    description='my Athena source description',
    date_created_column='date_created',
    aws_region='us-east-1',
    s3_output_location='s3://some-athena-queries-bucket/',
    workgroup='some-workgroup',
    query='SELECT * FROM "db"."table"',
    aws_authentication=AwsAssumeRoleAuthentication(role_arn='some_role_arn'),
    time_partition_columns=DatePartitionColumns(date_column_name='date_pt', date_format='%Y%m%d'),
)

📘

Workgroups

Your default workgroup in Athena is called primary. However, for better organization and resource management, it's recommended to create a dedicated workgroup for FeatureSet-related queries. This separation keeps queries related to Qwak's FeatureSets isolated from other users or applications using AWS Athena, allowing for easier debugging, query prioritization, and enhanced governance.

The data source configuration supports two ways of authenticating to AWS Athena:

aws_authentication: AwsAuthentication

  • Description: Authentication method to be used.

  • Mandatory: Yes

  • Options:

    • AwsAssumeRoleAuthentication

      • Description: Authentication using assumed role.

      • Fields:

        • role_arn: str: Mandatory
      • Example:

        from qwak.feature_store.sources.source_authentication import AwsAssumeRoleAuthentication
        
        aws_authentication = AwsAssumeRoleAuthentication(role_arn='some_role_arn')
        
    • AwsCredentialsAuthentication

      • Description: Authentication using AWS credentials in Qwak secrets.

      • Fields:

        • access_key_secret_name: str: Mandatory
        • secret_key_secret_name: str: Mandatory
      • Example:

        from qwak.feature_store.data_sources.source_authentication import AwsCredentialsAuthentication
        
        aws_authentication = AwsCredentialsAuthentication(access_key_secret_name='your-access-key-qwak-secret', 
                                                          secret_key_secret_name='your-secret-key-qwak-secret')
        

Define date partition columns (Optional)

time_partition_columns: TimePartitionColumns

  • Description: Define date partition columns correlated with date_created_column.
  • Optional: Yes (Highly recommended)
  • Options:
    • DatePartitionColumns
      • Fields:
        • date_column_name: str: Mandatory
        • date_format: str: Mandatory
      • Example:
      from qwak.feature_store.sources.time_partition_columns import DatePartitionColumns
      time_partition_columns = DatePartitionColumns(date_column_name='date_pt', date_format='%Y%m%d')
      
    • TimeFragmentedPartitionColumns
      • Fields:
        • year_partition_column: YearFragmentColumn: Mandatory
        • month_partition_column: MonthFragmentColumn: Optional (Must be set if day_partition_column is set)
        • day_partition_column: DayFragmentColumn: Optional
      • Examples:
        • For year=2022/month=01/day=05:
          from qwak.feature_store.sources.time_partition_columns import (
              ColumnRepresentation,
              TimeFragmentedPartitionColumns,
              YearFragmentColumn,
              MonthFragmentColumn,
              DayFragmentColumn,
          )
          time_partition_columns = TimeFragmentedPartitionColumns(
              YearFragmentColumn("year", ColumnRepresentation.NumericColumnRepresentation),
              MonthFragmentColumn("month", ColumnRepresentation.NumericColumnRepresentation),
              DayFragmentColumn("day", ColumnRepresentation.NumericColumnRepresentation),
          )
          
        • For year=2022/month=January/day=5:
          from qwak.feature_store.sources.time_partition_columns import (
              ColumnRepresentation,
              DayFragmentColumn,
              MonthFragmentColumn,
              TimeFragmentedPartitionColumns,
              YearFragmentColumn,
          )
          time_partition_columns = TimeFragmentedPartitionColumns(
              YearFragmentColumn("year", ColumnRepresentation.NumericColumnRepresentation),
              MonthFragmentColumn("month", ColumnRepresentation.TextualColumnRepresentation),
              DayFragmentColumn("day", ColumnRepresentation.NumericColumnRepresentation),
          )
          

The default timestamp format for date_created_column is yyyy-MM-dd'T'HH:mm:ss, optionally with [.SSS][XXX], for example 2020-01-01T00:00:00.