Here are a few tips for optimizing a star schema in Power BI when dealing with high-cardinality dimensions:
Reduce Cardinality in Dimensions: High-cardinality dimensions, such as a customer or product table with millions of unique values, can degrade performance. You can look into:
Pre-aggregating data: create summary or hierarchy tables, for example grouping customers by region.
Using surrogate keys: replace text-based keys (such as an email address or product name), which slow down joins, with integer-based keys, as sketched below.
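A minimal sketch of the surrogate-key idea in Power Query (M), assuming a source query named Customers that is currently joined on an Email text column (both names are hypothetical):

```
// Sketch only: Customers and Email are assumed names.
let
    Source = Customers,

    // Add a sequential integer key so the fact table can relate on a
    // small numeric column instead of the long Email text value.
    AddKey = Table.AddIndexColumn(Source, "CustomerKey", 1, 1, Int64.Type)
in
    AddKey
```

The fact table query would then be merged against this query on Email to pick up CustomerKey, after which the Email column can be dropped from the fact table.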
Optimize Fact Table Design:
Keep the fact table narrow, removing unwanted columns.
Store only surrogate keys in fact tables for dimension relationships instead of repeating large descriptive values.
Consider pre-aggregating transactional data to bring down the row count (see the sketch after this list).
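As an illustration of the pre-aggregation idea, the sketch below rolls an assumed Sales query (with datetime OrderDate, CustomerKey, and Amount columns, all hypothetical names) up to one row per customer per day:

```
// Sketch only: Sales, OrderDate, CustomerKey, and Amount are assumed names.
let
    Source = Sales,

    // Truncate the timestamp to a date so it no longer inflates cardinality.
    DateOnly = Table.TransformColumns(Source, {{"OrderDate", DateTime.Date, type date}}),

    // One row per customer per day instead of one row per transaction.
    Aggregated = Table.Group(
        DateOnly,
        {"OrderDate", "CustomerKey"},
        {
            {"SalesAmount", each List.Sum([Amount]), type number},
            {"TransactionCount", each Table.RowCount(_), Int64.Type}
        }
    )
in
    Aggregated
```

Whether this grain is acceptable depends on the reports; aggregate only to the lowest level of detail the business actually needs.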
Efficient Relationship Management:
Use single-direction relationships whenever possible, as they are cheaper for the engine to evaluate than bidirectional filtering.
Avoid many-to-many relationships, since they add complexity and slow down queries.
Snowflake the design only when it actually reduces duplication without introducing too many joins.
Column Encoding and Compression:
Power BI's VertiPaq storage engine compresses low-cardinality columns far more effectively, so convert high-cardinality text fields into numeric codes where possible (as sketched below).
Remove columns from dimension tables that are not needed for reporting.
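A sketch of both ideas in Power Query, using hypothetical Products, LongDescription, InternalNotes, and SKU names; in a real model the original text value would usually stay in a small lookup table, with only the integer code kept in the large table:

```
// Sketch only: all table and column names here are assumed.
let
    Source = Products,

    // Drop wide descriptive columns that no report actually uses.
    Trimmed = Table.RemoveColumns(Source, {"LongDescription", "InternalNotes"}),

    // Recode the high-cardinality SKU text as an integer:
    // build a distinct SKU list, index it, then merge the code back in.
    SkuCodes = Table.AddIndexColumn(
        Table.Distinct(Table.SelectColumns(Trimmed, {"SKU"})),
        "SkuCode", 1, 1, Int64.Type
    ),
    Merged = Table.NestedJoin(Trimmed, {"SKU"}, SkuCodes, {"SKU"}, "SkuLookup", JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(Merged, "SkuLookup", {"SkuCode"}),
    Recoded = Table.RemoveColumns(Expanded, {"SKU"})
in
    Recoded
```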
Partitioning and Incremental Refresh:
When high-cardinality tables are expected to grow over time (for example, customers), set up incremental refresh so that only new data is loaded instead of reloading the full dataset (see the sketch below).
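Incremental refresh itself is configured on the table in Power BI Desktop, but it depends on a Power Query filter over the reserved RangeStart and RangeEnd datetime parameters. A minimal sketch, assuming a Sales query with a datetime OrderDate column (both hypothetical names):

```
// Sketch only: RangeStart and RangeEnd must exist as datetime parameters;
// Sales and OrderDate are assumed names.
let
    Source = Sales,

    // Keep one boundary inclusive and the other exclusive so partitions
    // never overlap or drop rows.
    Filtered = Table.SelectRows(
        Source,
        each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
    )
in
    Filtered
```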
By applying these tactics, you can keep the star schema optimized for Power BI and avoid slow reports, even with large data volumes and high-cardinality dimensions.