Alyssa Johansen


A Wine Quality Analytics Dashboard

01 · The Brief

Translating chemical wine data into something a winery can actually use


The Viticulture Commission of the Vinho Verde Region (CVRVV) certifies and promotes wines from northern Portugal — but its data sat in raw, unanalyzed form. Our brief was to act as consultants to CVRVV: analyze 6,497 Vinho Verde wine samples across 11 physicochemical properties, identify what actually drives quality ratings, and build a dashboard that local wineries could use to improve their products.


The core questions

What factors most influence wine quality? Can we predict quality? Which properties don't matter? And do those answers differ between red and white wines? The dashboard had to answer all four — clearly enough that a winery owner without a data background could act on the findings.

02 · The Dataset

6,497 wines, 11 properties, one quality score

The dataset, collected by Cortez et al. (2009) from CVRVV between 2004 and 2007, contained 4,898 white and 1,599 red Vinho Verde samples — each with 11 physicochemical measurements (alcohol, volatile acidity, density, chlorides, pH, sulphates, etc.) and a quality rating from human tasters on a 0–10 scale.
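Because red and white samples ship as separate files, the first step is stacking them into one frame with a wine-type label. A minimal sketch of that pattern — the stand-in frames below carry only a few of the 11 measurements; in the real workflow they would come from `pd.read_csv("winequality-red.csv", sep=";")` and the white-wine equivalent (the UCI distribution is semicolon-separated):

```python
import pandas as pd

# Stand-in frames with a subset of the 11 physicochemical columns.
red = pd.DataFrame({
    "alcohol": [9.4, 9.8],
    "volatile acidity": [0.70, 0.88],
    "quality": [5, 5],
})
white = pd.DataFrame({
    "alcohol": [8.8, 10.1],
    "volatile acidity": [0.27, 0.30],
    "quality": [6, 6],
})

# Tag each sample with its wine type, then stack into one frame
# so every later chart can filter on "type".
red["type"] = "red"
white["type"] = "white"
wines = pd.concat([red, white], ignore_index=True)
```

On the full files this yields the 6,497-row frame the dashboard is built on.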


Ethical considerations we flagged

The dataset had important limitations: no data on grape type, brand, or price; a significant class imbalance (far more average wines than excellent or poor ones); and inherent subjectivity in human taster ratings. We documented these openly as constraints on how findings should be applied.
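The class-imbalance caveat is easy to quantify: mid-scale ratings dominate while the extremes barely appear. A quick sketch — the counts below are illustrative of the published distribution, not authoritative figures:

```python
import pandas as pd

# Illustrative rating counts: middle ratings dominate, extremes are rare.
counts = pd.Series({3: 30, 4: 216, 5: 2138, 6: 2836, 7: 1079, 8: 193, 9: 5})

# Share of each rating, and the combined share of the "average" band (5-7).
share = counts / counts.sum()
mid_share = share.loc[5:7].sum()
```

With numbers like these, wines rated 5–7 account for over 90% of the sample — which is why any "predict excellent wines" framing needed a caveat.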

03 · The Process

From EDA to prototype to live product

1

Stakeholder framing & EDA

Defined the consulting brief, stakeholder goals, and research questions. Conducted exploratory data analysis to identify the key predictors of quality — alcohol content emerged as the strongest, with volatile acidity and density also playing significant roles.
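The core of that EDA step is a per-property correlation with the quality rating. A toy-data sketch of the computation (the frame stands in for the full dataset; the real analysis ran the same call over all 11 physicochemical columns):

```python
import pandas as pd

# Toy frame standing in for the full 6,497-row dataset.
df = pd.DataFrame({
    "alcohol":          [9.4, 9.8, 10.5, 11.2, 12.8, 13.0],
    "volatile acidity": [0.70, 0.88, 0.50, 0.40, 0.30, 0.28],
    "density":          [0.9978, 0.9968, 0.9960, 0.9952, 0.9940, 0.9938],
    "quality":          [5, 5, 6, 6, 7, 8],
})

# Pearson correlation of each property with quality, ordered from most
# positive to most negative — the same ordering the dashboard's
# correlation chart uses.
corr = df.corr()["quality"].drop("quality").sort_values(ascending=False)
```

Sorting the correlations this way is what makes the "what matters most" hierarchy readable at a glance.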

2

Figma prototype

Designed a high-fidelity interactive prototype in Figma before writing any code. The prototype included quality histograms, correlation visualizations, a property guide, and filters for wine type — structured to answer the four core research questions in order.

3

User testing (n=7, UEQ)

Ran a structured usability study with 7 participants using the Figma prototype. Participants completed directed tasks (answering the four dashboard questions via multiple choice to measure task accuracy) and the 26-item User Experience Questionnaire. Collected both quantitative UEQ scores and open-ended qualitative feedback.
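For readers unfamiliar with UEQ scoring: the 26 items are grouped into six scales, answers are recoded from the 7-point response positions onto a −3…+3 range, and each scale score is the mean item score per participant, averaged across participants. A sketch with hypothetical responses for one four-item scale (the values are invented for illustration, not our study's data):

```python
import statistics

# Hypothetical responses for one UEQ scale (e.g. a four-item scale),
# already recoded from the 1-7 answer positions to the -3..+3 range.
# Rows = participants (n=7), columns = the scale's items.
responses = [
    [2, 1, 2, 3],
    [1, 0, 2, 2],
    [3, 2, 3, 3],
    [0, -1, 1, 1],
    [2, 2, 2, 3],
    [1, 1, 0, 2],
    [2, 3, 2, 2],
]

# Scale score: mean item score per participant, then mean across participants.
per_participant = [statistics.mean(row) for row in responses]
scale_score = statistics.mean(per_participant)
```

Scores above roughly +0.8 are conventionally read as positive, which is how we triaged which scales needed attention.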

4

Iteration based on findings

User testing surfaced three clear improvement areas — UI consistency, navigation bugs, and an underdeveloped property guide. We used these findings to prioritize what to address before moving into Streamlit development.

5

Streamlit deployment

Built and deployed the dashboard as a live Streamlit app — moving from the Figma prototype into a coded, interactive product. The Streamlit environment resolved the UI consistency and navigation issues flagged in testing that were artifacts of manual Figma prototyping.
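The wine-type filter plus quality histogram at the heart of the app reduces to a few lines of Streamlit. A minimal sketch, assuming a hypothetical combined `wines.csv` with a `type` column (file name and layout are assumptions, not the deployed app's code); the filtering logic lives in a plain function so it stays unit-testable outside Streamlit:

```python
import pandas as pd

def filter_wines(df: pd.DataFrame, wine_type: str) -> pd.DataFrame:
    """Return the rows matching the sidebar selection ('all', 'red', or 'white')."""
    if wine_type == "all":
        return df
    return df[df["type"] == wine_type]

def main() -> None:
    # Streamlit UI — launch with `streamlit run app.py`.
    import streamlit as st
    wines = pd.read_csv("wines.csv")  # hypothetical combined red+white file
    choice = st.sidebar.radio("Wine type", ["all", "red", "white"])
    counts = filter_wines(wines, choice)["quality"].value_counts().sort_index()
    st.bar_chart(counts)  # quality histogram for the selected subset
```

Streamlit reruns the script on every widget change, so the filter-and-chart pair updates without any explicit event handling.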

04 · User Testing Findings

What 7 participants told us about the prototype


UI consistency

Users noticed layout variations between tabs and one navigation bug that "locked" them when scrolling — requiring a click on "All Wines" to reset. Positioning inconsistencies across summary views were flagged for polish.


Property guide: biggest win

The property guide — which explained what each physicochemical property means — was the most praised feature. Users called it "very informational" and said it was essential for understanding the data without wine expertise.


Visualization clarity

The correlation chart ordering (most positively correlated → neutral → most negatively correlated) landed well. Some users wanted side-by-side graph views and found it hard to compare across tabs from memory.


Color palette praised

Aesthetic reception was positive — "I love the color palette" — suggesting the visual design choices were appropriate and trustworthy for a wine industry stakeholder context.

"The property guide is great — very informational for someone like me who knows nothing about alcohol quality."

"I had to tab between the two screens multiple times because it was hard to remember what I was looking at."

"You have made sure to provide as much information as possible."


How testing shaped the final product

Three priorities emerged from synthesis: expand the property guide to cover all 11 physicochemical properties (the prototype covered only 3); fix navigation and layout consistency; improve cross-tab comparison. The move from Figma to Streamlit naturally resolved the consistency bugs, while the property guide expansion became a deliberate design priority in the coded version.

05 · What the Data Said

Alcohol is the strongest predictor — but red and white wines differ


The analysis revealed a clear hierarchy of quality predictors. Alcohol content was the dominant factor for both red and white wines, with higher alcohol correlating strongly with higher quality ratings. Volatile acidity and density also mattered — but in different ways for red vs. white wines, supporting separate guidance for each wine type.
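The red-vs-white comparison comes from running the same correlation separately per wine type. A toy-data sketch of that split (samples invented for illustration; the real analysis ran over all 6,497 wines):

```python
import pandas as pd

# Toy samples with a type label, standing in for the full dataset.
wines = pd.DataFrame({
    "type":             ["red"] * 4 + ["white"] * 4,
    "alcohol":          [9.4, 10.0, 11.5, 12.5, 8.8, 9.5, 11.0, 12.8],
    "volatile acidity": [0.70, 0.60, 0.45, 0.35, 0.27, 0.30, 0.25, 0.22],
    "quality":          [5, 5, 6, 7, 5, 6, 6, 7],
})

# Correlation with quality, computed separately per wine type — the
# comparison that motivated separate guidance for red and white wines.
red_corr = wines[wines["type"] == "red"].drop(columns="type").corr()["quality"]
white_corr = wines[wines["type"] == "white"].drop(columns="type").corr()["quality"]
```

Placing the two resulting correlation series side by side is what exposes where the red and white quality drivers diverge.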

Several properties — fixed acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, and pH — showed limited impact on quality, giving wineries a clearer picture of where to focus their optimization efforts.

06 · What I Learned

Data products are UX products


- A dashboard is only as good as the decisions it enables. The most important design question wasn't "how do we visualize this data" — it was "what does a winery owner need to walk away knowing?" That stakeholder framing changed everything about how we structured the information.

- Prototype before you build. Testing the Figma version before writing Streamlit code saved us from building the wrong thing — the user testing surfaced the property guide gap and navigation issues early, when they were easy to fix.

- The UEQ is a powerful mixed-methods tool. Combining quantitative UEQ scores with open-ended feedback gave us both the signal (something is off with navigation) and the story (here's exactly what the user experienced). Neither alone would have been as actionable.

- Ethical transparency builds trust. Proactively documenting the dataset's limitations — class imbalance, subjective tasters, missing variables — made the analysis more credible, not less. Stakeholders trust findings more when they can see the caveats.


Copyright © 2026 Alyssa Johansen - All Rights Reserved.
