GloWGR¶
Data preparation and helper functions¶

glow.wgr.functions.
block_variants_and_samples
(variant_df: pyspark.sql.dataframe.DataFrame, sample_ids: List[str], variants_per_block: int, sample_block_count: int) → Tuple[pyspark.sql.dataframe.DataFrame, Dict[str, List[str]]][source]¶ Creates a blocked GT matrix and an index mapping from sample blocks to lists of corresponding sample IDs. Uses the same sample-blocking logic as the blocked GT matrix transformer.
Requires that:
Each variant row has the same number of values
The number of values per row matches the number of sample IDs
 Parameters
variant_df – The variant DataFrame
sample_ids – The list of sample ID strings
variants_per_block – The number of variants per block
sample_block_count – The number of sample blocks
 Returns
tuple of (blocked GT matrix, index mapping)
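The exact blocking scheme is internal to Glow, but the index mapping in the returned tuple can be pictured as a contiguous, roughly even partition of the sample IDs keyed by block ID. A minimal pure-Python sketch (make_sample_blocks is a hypothetical helper, not part of the Glow API):

```python
import math

def make_sample_blocks(sample_ids, sample_block_count):
    """Illustrative sketch: partition sample IDs into contiguous,
    roughly equal-sized blocks keyed by 1-based block ID strings.
    (Hypothetical helper; Glow computes this mapping internally.)"""
    block_size = math.ceil(len(sample_ids) / sample_block_count)
    return {
        str(i + 1): sample_ids[i * block_size:(i + 1) * block_size]
        for i in range(sample_block_count)
        if sample_ids[i * block_size:(i + 1) * block_size]
    }

sample_ids = ['sample%d' % i for i in range(1, 8)]  # 7 samples
blocks = make_sample_blocks(sample_ids, 3)
# Every sample lands in exactly one block.
assert sorted(s for ids in blocks.values() for s in ids) == sorted(sample_ids)
```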

glow.wgr.functions.
get_sample_ids
(data: pyspark.sql.dataframe.DataFrame) → List[str][source]¶ Extracts sample IDs from a variant DataFrame, such as one read from PLINK files.
 Requires that the sample IDs:
Are in genotype.sampleId
Are the same across all the variant rows
Are a list of strings
Are nonempty
Are unique
 Parameters
data – The variant DataFrame containing sample IDs
 Returns
list of sample ID strings
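The listed requirements can be pictured with a plain-Python sketch that pulls genotype.sampleId from each row and checks the invariants. extract_sample_ids is a hypothetical helper for illustration only; the real function operates on a Spark DataFrame:

```python
def extract_sample_ids(rows):
    """Illustrative sketch: extract genotype.sampleId from each variant
    row and validate that the IDs are consistent, nonempty, and unique.
    (Hypothetical helper, not the Glow implementation.)"""
    first = [g['sampleId'] for g in rows[0]['genotypes']]
    for row in rows[1:]:
        ids = [g['sampleId'] for g in row['genotypes']]
        if ids != first:
            raise ValueError('Sample IDs differ between variant rows')
    if not first:
        raise ValueError('Sample ID list is empty')
    if len(set(first)) != len(first):
        raise ValueError('Sample IDs are not unique')
    return first

rows = [
    {'genotypes': [{'sampleId': 'a'}, {'sampleId': 'b'}]},
    {'genotypes': [{'sampleId': 'a'}, {'sampleId': 'b'}]},
]
assert extract_sample_ids(rows) == ['a', 'b']
```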

glow.wgr.functions.
reshape_for_gwas
(spark: pyspark.sql.session.SparkSession, label_df: pandas.core.frame.DataFrame) → pyspark.sql.dataframe.DataFrame[source]¶ Reshapes a Pandas DataFrame into a Spark DataFrame with a convenient format for Glow’s GWAS functions. This function can handle labels that are either per-sample, or per-sample and per-contig, such as those generated by GloWGR’s transform_loco function.
Examples
>>> label_df = pd.DataFrame({'label1': [1, 2], 'label2': [3, 4]}, index=['sample1', 'sample2'])
>>> reshaped = reshape_for_gwas(spark, label_df)
>>> reshaped.head()
Row(label='label1', values=[1, 2])
>>> loco_label_df = pd.DataFrame({'label1': [1, 2], 'label2': [3, 4]},
...     index=pd.MultiIndex.from_tuples([('sample1', 'chr1'), ('sample1', 'chr2')]))
>>> reshaped = reshape_for_gwas(spark, loco_label_df)
>>> reshaped.head()
Row(label='label1', contigName='chr1', values=[1])
 Requires that:
The input label DataFrame is indexed by sample ID or by (sample ID, contig name)
 Parameters
spark – A Spark session
label_df – A Pandas DataFrame containing labels. The DataFrame should either be indexed by sample ID or multi-indexed by (sample ID, contig name). Each column is interpreted as a label.
 Returns
A Spark DataFrame with a convenient format for Glow regression functions. Each row contains the label name, the contig name if provided in the input DataFrame, and an array containing the label value for each sample.
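The per-sample reshaping can be pictured in plain pandas: each label column becomes one output row holding the label name and the array of its values, ordered by sample. reshape_labels below is a hypothetical pandas-only sketch; the real function returns a Spark DataFrame:

```python
import pandas as pd

def reshape_labels(label_df):
    """Illustrative pandas-only sketch of the per-sample case:
    one output row per label column, with values ordered by sample index.
    (Hypothetical helper; reshape_for_gwas returns a Spark DataFrame.)"""
    return pd.DataFrame({
        'label': label_df.columns,
        'values': [label_df[c].tolist() for c in label_df.columns],
    })

label_df = pd.DataFrame({'label1': [1, 2], 'label2': [3, 4]},
                        index=['sample1', 'sample2'])
reshaped = reshape_labels(label_df)
# First row mirrors the doctest above: label='label1', values=[1, 2]
assert reshaped.iloc[0]['values'] == [1, 2]
```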
Ridge model¶

class
glow.wgr.linear_model.ridge_model.
RidgeReducer
[source]¶ The RidgeReducer class is intended to reduce the feature space of an N by M block matrix X to an N by P block matrix, where P << M. This is done by fitting K ridge models within each block of X on one or more target labels, so that a block that starts with L columns is reduced to a block with K columns, where each column is the prediction of one ridge model for one target label.
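The reduction can be sketched in plain NumPy for a single block and a single label, where K equals the number of ridge alpha parameters. This is an illustration of the idea only, under the assumption of a closed-form ridge fit; it is not the distributed Glow implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 50, 10                      # samples, columns in one block
X = rng.standard_normal((N, L))    # one block of the starting matrix
y = rng.standard_normal(N)         # one target label
alphas = [0.1, 1.0, 10.0]          # K = 3 ridge alpha parameters

# Closed-form ridge fit per alpha: beta = (X'X + alpha*I)^-1 X'y.
# Each fitted model contributes one column of predictions, so this
# N x L block is reduced to an N x K block (here K = len(alphas)).
reduced = np.column_stack([
    X @ np.linalg.solve(X.T @ X + a * np.eye(L), X.T @ y)
    for a in alphas
])
assert reduced.shape == (N, len(alphas))
```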

fit
(blockdf: pyspark.sql.dataframe.DataFrame, labeldf: pandas.core.frame.DataFrame, sample_blocks: Dict[str, List[str]], covdf: pandas.core.frame.DataFrame = Empty DataFrame Columns: [] Index: []) → pyspark.sql.dataframe.DataFrame[source]¶ Fits a ridge reducer model, represented by a Spark DataFrame containing coefficients for each of the ridge alpha parameters, for each block in the starting matrix, for each label in the target labels.
 Parameters
blockdf – Spark DataFrame representing the beginning block matrix X
labeldf – Pandas DataFrame containing the target labels used in fitting the ridge models
sample_blocks – Dict containing a mapping of sample_block ID to a list of corresponding sample IDs
covdf – Pandas DataFrame containing covariates to be included in every model in the stacking ensemble (optional).
 Returns
Spark DataFrame containing the model resulting from the fitting routine.

fit_transform
(blockdf: pyspark.sql.dataframe.DataFrame, labeldf: pandas.core.frame.DataFrame, sample_blocks: Dict[str, List[str]], covdf: pandas.core.frame.DataFrame = Empty DataFrame Columns: [] Index: []) → pyspark.sql.dataframe.DataFrame[source]¶ Fits a ridge reducer model with a block matrix, then transforms the matrix using the model.
 Parameters
blockdf – Spark DataFrame representing the beginning block matrix X
labeldf – Pandas DataFrame containing the target labels used in fitting the ridge models
sample_blocks – Dict containing a mapping of sample_block ID to a list of corresponding sample IDs
covdf – Pandas DataFrame containing covariates to be included in every model in the stacking ensemble (optional).
 Returns
Spark DataFrame representing the reduced block matrix

transform
(blockdf: pyspark.sql.dataframe.DataFrame, labeldf: pandas.core.frame.DataFrame, sample_blocks: Dict[str, List[str]], modeldf: pyspark.sql.dataframe.DataFrame, covdf: pandas.core.frame.DataFrame = Empty DataFrame Columns: [] Index: []) → pyspark.sql.dataframe.DataFrame[source]¶ Transforms a starting block matrix to the reduced block matrix, using a reducer model produced by the RidgeReducer fit method.
 Parameters
blockdf – Spark DataFrame representing the beginning block matrix
labeldf – Pandas DataFrame containing the target labels used in fitting the ridge models
sample_blocks – Dict containing a mapping of sample_block ID to a list of corresponding sample IDs
modeldf – Spark DataFrame produced by the RidgeReducer fit method, representing the reducer model
covdf – Pandas DataFrame containing covariates to be included in every model in the stacking ensemble (optional).
 Returns
Spark DataFrame representing the reduced block matrix


class
glow.wgr.linear_model.ridge_model.
RidgeRegression
[source]¶ The RidgeRegression class is used to fit ridge models against one or more labels, optimized over a provided list of ridge alpha parameters. It is similar in function to RidgeReducer, except that whereas RidgeReducer attempts to reduce a starting matrix X to a block matrix of smaller dimension, RidgeRegression is intended to find an optimal model of the form Y_hat ~ XB, where Y_hat is a matrix of one or more predicted labels and B is a matrix of coefficients. The optimal ridge alpha value is chosen for each label by maximizing the average out-of-fold r2 score.
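The alpha-selection criterion can be sketched in NumPy for a single label: fit a closed-form ridge model on each training fold, score it on the held-out fold, and keep the alpha with the best average out-of-fold r2. This is an illustration of the criterion under simplified assumptions, not Glow's distributed cross-validation routine:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 60, 5
X = rng.standard_normal((N, P))
y = X @ rng.standard_normal(P) + 0.1 * rng.standard_normal(N)
alphas = [0.01, 1.0, 100.0]

def oof_r2(alpha, k=3):
    """Average out-of-fold r2 for one ridge alpha (simple k-fold sketch)."""
    scores = []
    for fold in np.array_split(np.arange(N), k):
        train = np.setdiff1d(np.arange(N), fold)
        Xt, yt = X[train], y[train]
        # Closed-form ridge fit on the training fold.
        beta = np.linalg.solve(Xt.T @ Xt + alpha * np.eye(P), Xt.T @ yt)
        pred = X[fold] @ beta
        ss_res = np.sum((y[fold] - pred) ** 2)
        ss_tot = np.sum((y[fold] - y[fold].mean()) ** 2)
        scores.append(1 - ss_res / ss_tot)
    return np.mean(scores)

# The optimal alpha maximizes the average out-of-fold r2 score.
best_alpha = max(alphas, key=oof_r2)
```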

fit
(blockdf: pyspark.sql.dataframe.DataFrame, labeldf: pandas.core.frame.DataFrame, sample_blocks: Dict[str, List[str]], covdf: pandas.core.frame.DataFrame = Empty DataFrame Columns: [] Index: []) → Tuple[pyspark.sql.dataframe.DataFrame, pyspark.sql.dataframe.DataFrame][source]¶ Fits a ridge regression model, represented by a Spark DataFrame containing coefficients for each of the ridge alpha parameters, for each block in the starting matrix, for each label in the target labels, as well as a Spark DataFrame containing the optimal ridge alpha value for each label.
 Parameters
blockdf – Spark DataFrame representing the beginning block matrix X
labeldf – Pandas DataFrame containing the target labels used in fitting the ridge models
sample_blocks – Dict containing a mapping of sample_block ID to a list of corresponding sample IDs
covdf – Pandas DataFrame containing covariates to be included in every model in the stacking ensemble (optional).
 Returns
Two Spark DataFrames, one containing the model resulting from the fitting routine and one containing the results of the cross validation procedure.

fit_transform
(blockdf: pyspark.sql.dataframe.DataFrame, labeldf: pandas.core.frame.DataFrame, sample_blocks: Dict[str, List[str]], covdf: pandas.core.frame.DataFrame = Empty DataFrame Columns: [] Index: []) → pandas.core.frame.DataFrame[source]¶ Fits a ridge regression model with a block matrix, then transforms the matrix using the model.
 Parameters
blockdf – Spark DataFrame representing the beginning block matrix X
labeldf – Pandas DataFrame containing the target labels used in fitting the ridge models
sample_blocks – Dict containing a mapping of sample_block ID to a list of corresponding sample IDs
covdf – Pandas DataFrame containing covariates to be included in every model in the stacking ensemble (optional).
 Returns
Pandas DataFrame containing prediction y_hat values. The shape and order match labeldf such that the rows are indexed by sample ID and the columns by label. The column types are float64.

transform
(blockdf: pyspark.sql.dataframe.DataFrame, labeldf: pandas.core.frame.DataFrame, sample_blocks: Dict[str, List[str]], modeldf: pyspark.sql.dataframe.DataFrame, cvdf: pyspark.sql.dataframe.DataFrame, covdf: pandas.core.frame.DataFrame = Empty DataFrame Columns: [] Index: []) → pandas.core.frame.DataFrame[source]¶ Generates predictions for the target labels in the provided label DataFrame by applying the model resulting from the RidgeRegression fit method to the starting block matrix.
 Parameters
blockdf – Spark DataFrame representing the beginning block matrix X
labeldf – Pandas DataFrame containing the target labels used in fitting the ridge models
sample_blocks – Dict containing a mapping of sample_block ID to a list of corresponding sample IDs
modeldf – Spark DataFrame produced by the RidgeRegression fit method, representing the reducer model
cvdf – Spark DataFrame produced by the RidgeRegression fit method, containing the results of the cross validation routine.
covdf – Pandas DataFrame containing covariates to be included in every model in the stacking ensemble (optional).
 Returns
Pandas DataFrame containing prediction y_hat values. The shape and order match labeldf such that the rows are indexed by sample ID and the columns by label. The column types are float64.

transform_loco
(blockdf: pyspark.sql.dataframe.DataFrame, labeldf: pandas.core.frame.DataFrame, sample_blocks: Dict[str, List[str]], modeldf: pyspark.sql.dataframe.DataFrame, cvdf: pyspark.sql.dataframe.DataFrame, covdf: pandas.core.frame.DataFrame = Empty DataFrame Columns: [] Index: [], chromosomes: List[str] = []) → pandas.core.frame.DataFrame[source]¶ Generates predictions for the target labels in the provided label DataFrame by applying the model resulting from the RidgeRegression fit method to the starting block matrix using a leave-one-chromosome-out (LOCO) approach.
 Parameters
blockdf – Spark DataFrame representing the beginning block matrix X
labeldf – Pandas DataFrame containing the target labels used in fitting the ridge models
sample_blocks – Dict containing a mapping of sample_block ID to a list of corresponding sample IDs
modeldf – Spark DataFrame produced by the RidgeRegression fit method, representing the reducer model
cvdf – Spark DataFrame produced by the RidgeRegression fit method, containing the results of the cross validation routine.
covdf – Pandas DataFrame containing covariates to be included in every model in the stacking ensemble (optional).
chromosomes – List of chromosomes for which to generate a prediction (optional). If not provided, the chromosomes will be inferred from the block matrix.
 Returns
Pandas DataFrame containing prediction y_hat values per chromosome. The rows are indexed by sample ID and chromosome; the columns are indexed by label. The column types are float64. The DataFrame is sorted using chromosome as the primary sort key, and sample ID as the secondary sort key.
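The LOCO idea behind this output can be pictured in a small pandas sketch: the prediction reported for a chromosome sums the genetic contributions of every other chromosome, so that chromosome's own variants never predict themselves. The per-chromosome contributions here are synthetic and for illustration only:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
samples = ['s1', 's2', 's3']
chromosomes = ['chr1', 'chr2', 'chr3']

# Hypothetical per-chromosome contributions to the genetic prediction
# for one label (rows: samples, columns: chromosomes).
contrib = pd.DataFrame(rng.standard_normal((3, 3)),
                       index=samples, columns=chromosomes)

# Leave-one-chromosome-out: the value reported for chromosome c sums
# the contributions of every chromosome except c.
loco = pd.DataFrame(
    {c: contrib.drop(columns=c).sum(axis=1) for c in chromosomes}
).stack()
loco.index.names = ['sample_id', 'contigName']
# For each (sample, chromosome): total minus own-chromosome contribution.
assert np.isclose(loco['s1', 'chr1'],
                  contrib.loc['s1'].sum() - contrib.loc['s1', 'chr1'])
```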
