Optimizing a Power Pivot model built on a large transactional database requires careful planning. Start by reducing memory usage: drop columns and rows the model does not need, minimize calculated columns in favor of measures where possible (calculated columns are stored and compressed with the model, while measures are evaluated only at query time), and improve compression by choosing appropriate data types, such as whole numbers instead of decimals for numeric values. Together, these steps can significantly reduce the model size.
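As a minimal Power Query (M) sketch of this trimming step, the query below assumes a workbook table named Sales with hypothetical columns InternalNote, LegacyCode, Quantity, and Amount; the names are illustrative, not part of any standard schema:

```
let
    // Hypothetical source: a workbook table named "Sales"
    Source = Excel.CurrentWorkbook(){[Name = "Sales"]}[Content],

    // Drop columns the model never uses (illustrative names)
    Trimmed = Table.RemoveColumns(Source, {"InternalNote", "LegacyCode"}),

    // Store quantities as whole numbers and amounts as fixed decimal
    // (currency) rather than floating point, which compresses better
    Typed = Table.TransformColumnTypes(
        Trimmed,
        {{"Quantity", Int64.Type}, {"Amount", Currency.Type}}
    )
in
    Typed
```

On the DAX side, a simple measure such as Total Sales := SUM(Sales[Amount]) is computed at query time and adds nothing to the stored model, unlike an equivalent calculated column.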
Next, organize your data before loading it into Power Pivot: sorting and grouping similar entries helps the columnar storage engine compress the data more effectively, which reduces query overhead and improves performance. Design table relationships to minimize many-to-many relationships as far as possible, and prefer a star schema over a snowflake schema to keep relationships simple and queries fast.
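The sorting step can also happen in Power Query before load. The sketch below assumes the same hypothetical Sales table with Region and ProductCategory columns; sorting by low-cardinality columns first keeps runs of repeated values together, which tends to help columnar compression:

```
let
    Source = Excel.CurrentWorkbook(){[Name = "Sales"]}[Content],

    // Sort low-cardinality columns first so repeated values sit in
    // long runs, which columnar compression handles efficiently
    Sorted = Table.Sort(
        Source,
        {{"Region", Order.Ascending}, {"ProductCategory", Order.Ascending}}
    )
in
    Sorted
```

For the schema itself, aim for one central fact table (such as Sales) related many-to-one to small dimension tables (Date, Product, Customer) rather than chains of dimension-to-dimension links.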
Finally, aggregate data to higher levels where the analysis allows it, rolling daily totals up to monthly totals, for example, whenever row-level granularity is not required. Pre-aggregate with tools such as Power Query before loading into Power Pivot. With these techniques, you can comfortably handle millions of rows without sacrificing performance.
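As a sketch of pre-aggregation in Power Query, the query below collapses a hypothetical DailySales table (with assumed columns Date, ProductID, Amount, and Quantity) into one row per product per month:

```
let
    Source = Excel.CurrentWorkbook(){[Name = "DailySales"]}[Content],

    // Derive a month-start date from the daily transaction date
    WithMonth = Table.AddColumn(
        Source, "Month",
        each Date.StartOfMonth([Date]), type date
    ),

    // Collapse daily rows into one row per product per month
    Monthly = Table.Group(
        WithMonth,
        {"Month", "ProductID"},
        {{"TotalAmount", each List.Sum([Amount]), type number},
         {"TotalQuantity", each List.Sum([Quantity]), Int64.Type}}
    )
in
    Monthly
```

Loading Monthly instead of DailySales cuts the row count by roughly a factor of thirty for daily data, while still supporting every monthly-level analysis.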