Conjoint Analysis¶
Conjoint Analysis estimates preference utilities from survey choices and helps quantify trade-offs between attributes and their levels (for example: price, brand, speed, service). It is provided by the Conjoint Analysis plugin, which you need to install. Please see Installing plugins.
Instead of asking respondents to rate each feature in isolation, conjoint analysis presents them with realistic alternatives and infers (see the sketch after this list):
Part-worth utilities for each attribute level.
Relative attribute importance.
Expected preference behavior across offer designs.
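Under the classical multinomial-logit view of conjoint data (assumed here purely for illustration), an alternative's total utility is the sum of the part-worths of its attribute levels, and choice probabilities within a task follow a softmax over the alternatives:

```python
import numpy as np

# Total utility V_i of an alternative = sum of its levels' part-worths.
# Choice probability within a task is a softmax over the alternatives.
def choice_probabilities(V):
    expV = np.exp(V - np.max(V))  # subtract max for numerical stability
    return expV / expV.sum()

# Three alternatives in one task, with made-up summed utilities.
print(choice_probabilities(np.array([0.8, 0.2, -0.5])))
```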
This is commonly used for:
Pricing and willingness-to-pay decisions.
Product/package design and feature prioritization.
Offer and portfolio optimization.
Marketing and sales positioning based on quantified drivers.
Inputs¶
Conjoint Choice Dataset: A long-format dataset with one row per alternative in each answer/task (a minimal example follows this list). It should contain:
A task identifier (optional in UI if recoverable from standard column names).
An alternative identifier (optional; alternatives can be auto-generated within each task).
A target column (binary, rating/grade, or ranking).
Conjoint attribute columns (optional in UI; can be inferred).
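For illustration, a minimal long-format choice dataset might look like the following (all column names are hypothetical; map them in Dataset mapping below):

```python
import pandas as pd

# One row per alternative per task; "chosen" is a binary target flagging
# the alternative the respondent picked. All names here are hypothetical.
df = pd.DataFrame({
    "answer_id": [1, 1, 1, 2, 2, 2],   # task/answer identifier
    "alt_id":    [1, 2, 3, 1, 2, 3],   # alternative within the task
    "price":     ["low", "high", "low", "high", "low", "high"],
    "brand":     ["A", "B", "A", "B", "A", "B"],
    "chosen":    [1, 0, 0, 0, 1, 0],
})
```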
Output¶
Main Conjoint Output: A dataset combining part-worth utilities and attribute-importance information.
attribute: Attribute name.
level: Attribute level.
utility / utility_centered: Estimated utility metrics (see the centering note below).
importance_raw / importance_pct / importance_rank: Importance indicators.
Debug / Validation Output (optional): Diagnostic dataset with model summary, warnings, and chart payloads.
Reports & Visuals Folder (optional): Managed folder containing an HTML report with model/mode explanations, legends, and interactive charts.
Parameters¶
The recipe behavior can be configured with the following parameters.
Dataset mapping¶
Answer ID Column: Task/answer identifier.
Alternative ID Column: Alternative identifier within each task.
Conjoint Attribute Columns: Attribute columns used to estimate utilities. If omitted, attributes are inferred.
Target mapping¶
Target Column: Unified target input for binary, rating/grade, and ranking surveys (a unification sketch follows at the end of this subsection).
Target Type:
auto: Detects target mode from data.
binary: Expects a chosen flag (0/1).
rating: Uses the highest value as the preferred alternative.
ranking: Uses ranking values to derive one preferred alternative.
Ranking Preference (when target_type = ranking):
lowest_is_best: Rank 1 is best.
highest_is_best: The highest rank is best.
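As a rough illustration of the unified-target logic above (assumed behavior, not the plugin's actual code), each target mode can be reduced to one preferred alternative per task:

```python
import pandas as pd

def derive_chosen(df, task_col, target_col, target_type,
                  ranking_pref="lowest_is_best"):
    """Reduce a binary / rating / ranking target to a 0/1 chosen flag.

    Sketch only: names and behavior are assumptions, not the plugin's code.
    """
    if target_type == "binary":
        return df[target_col].astype(int)
    if target_type == "rating":
        # Highest rating within each task wins.
        best = df.groupby(task_col)[target_col].transform("max")
        return (df[target_col] == best).astype(int)
    if target_type == "ranking":
        agg = "min" if ranking_pref == "lowest_is_best" else "max"
        best = df.groupby(task_col)[target_col].transform(agg)
        return (df[target_col] == best).astype(int)
    raise ValueError(f"unknown target_type: {target_type!r}")
```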
Modeling settings¶
Modeling Strategy:
auto: tries pylogit first and falls back if needed (sketched below).
pylogit: multinomial-logit-oriented estimation.
choicelearn: integration path for choice-learn workflows (current plugin behavior may use fallback).
sklearn-proxy: robust baseline fallback estimator.
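A hypothetical sketch of how the auto mode could resolve a strategy, trying pylogit first and falling back to the sklearn proxy (the actual plugin logic may differ):

```python
def resolve_strategy(requested="auto"):
    # Sketch only: try the canonical estimator first, fall back if the
    # dependency is unavailable. The plugin's real logic may differ.
    if requested in ("auto", "pylogit"):
        try:
            import pylogit  # noqa: F401  # classical MNL estimation
            return "pylogit"
        except ImportError:
            if requested == "pylogit":
                raise
    return "sklearn-proxy"  # robust baseline fallback
```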
Advanced parameters¶
Visible when Show Advanced Parameters is enabled.
Enable Debug Output Dataset: If disabled, the debug/validation dataset is written empty.
Random Seed: Reproducibility seed.
Max Iterations: Optimization iterations for frequentist fitting.
Regularization Strength (C): Inverse regularization strength used by the fallback logistic estimator; as in scikit-learn, smaller values mean stronger regularization.
Generate Business Charts: Enables visualization outputs.
Top Attributes in Chart: Limits attribute count in importance chart.
Write HTML Report to Managed Folder: Exports report to managed folder output.
Modeling strategies¶
auto: tries pylogit, then falls back.
pylogit (homepage): closest to classical discrete-choice conjoint interpretation.
choicelearn (homepage): integration-oriented strategy for richer choice-learning setups.
sklearn-proxy (scikit-learn): deterministic and robust baseline/fallback.
Detailed modeling differences¶
pylogit: Best aligned with standard conjoint/discrete-choice estimation.
Most sensitive to strict task structure and data quality.
Recommended for primary conjoint interpretation when data is well-formed.
choicelearn: Extension path toward richer choice-learning modeling.
Useful when teams plan to deepen native choice-learn integration.
In this plugin version, fallback behavior may apply for compatibility.
sklearn-proxy: Most robust option when data is imperfect (a proxy sketch follows this list).
Useful for baseline benchmarking, sanity checks, and reliable fallback runs.
Less canonical than dedicated discrete-choice estimators.
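A minimal sketch of what an sklearn-proxy style estimator could look like, assuming one-hot-encoded attribute levels and a binary chosen flag (hypothetical names; the coefficients act as proxy part-worths):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def fit_proxy_utilities(df, attribute_cols, chosen_col,
                        C=1.0, max_iter=200, seed=42):
    # One dummy column per attribute level; its coefficient serves as a
    # proxy part-worth utility for that level. Sketch, not the plugin code.
    X = pd.get_dummies(df[attribute_cols], prefix_sep="=")
    model = LogisticRegression(C=C, max_iter=max_iter, random_state=seed)
    model.fit(X, df[chosen_col])
    return pd.Series(model.coef_[0], index=X.columns, name="utility")
```

With the example dataset from Inputs, fit_proxy_utilities(df, ["price", "brand"], "chosen") would return one proxy utility per attribute level.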
How to read results¶
Part-worth utilities:
Higher utility indicates stronger relative preference.
Utility is comparative (not an absolute KPI).
Attribute importance:
Shows contribution of each attribute to preference variation.
Supports prioritization across product, pricing, and messaging (a computation sketch follows this section).
Model quality metrics:
Provide confidence signals for business communication and governance.
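A common convention, assumed here rather than confirmed as the plugin's exact formula, computes attribute importance from the range of part-worth utilities within each attribute:

```python
import pandas as pd

out = pd.DataFrame({  # hypothetical main-output rows
    "attribute": ["price", "price", "brand", "brand"],
    "utility":   [0.6, -0.2, 0.3, 0.1],
})
# importance_raw: utility range per attribute; importance_pct: normalized.
ranges = out.groupby("attribute")["utility"].agg(lambda u: u.max() - u.min())
importance_pct = 100 * ranges / ranges.sum()
importance_rank = importance_pct.rank(ascending=False).astype(int)
print(importance_pct.round(1))  # price dominates in this toy example
```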