Data Modeling/Ingestion/Report Building
Need help moving data into the Power BI ecosystem and then making sense of it—that is, ingesting/importing it, structuring it in a way that aligns with how Power BI works, then developing reports that derive actionable insights from the cleanly modeled data?
Does your team have Power BI skills but could use help mastering some of its more advanced concepts?
“On Retainer” Resource
Would it help if your BI team had access to a technical resource who is familiar with your company’s environment and to whom they can bring questions on an as-needed basis?
Perhaps you have an existing set of reports—dozens of similar-but-not-quite-the-same reports that are costly and confusing to maintain. Would you like your Power BI environment standardized in a way that is friendly to both self-service BI and corporate governance?
Maybe Ben can help!
Ben is a senior data expert with years of experience helping companies in sectors such as education, manufacturing, energy, publishing, and government. He’s both an architect and a developer, able to plan, design, and coordinate as well as implement solutions. In the Power BI realm, he’s known as the author of the Power Query M Primer series, an in-depth exploration of the language that powers Power BI’s data ingestion features.
Sound interesting? Drop Ben a line.
Project Success Example
Ben guided an international online education company in consolidating dozens of often-similar reports into a small handful of reports, each with a unique focus. Previously, almost every report had its own dataset; in the new design, a single shared dataset was modeled and implemented, ensuring that every report uses the same logic and data model.
- Users now work with a much simpler yet more powerful set of reports.
- The company avoids the cost of maintaining the previous large set of reports. Logic and data-definition changes are now made in one place instead of being manually applied to dozens of datasets.
- All reports display computations calculated the same way. Previously, the presence of dozens of separate datasets allowed logic variations to creep in.
- Row-level data security is applied consistently regardless of which report is accessed, as it is defined in, and enforced by, the single shared dataset.
- Database server load is lower. Instead of dozens of datasets performing scheduled refreshes throughout the day to pull similar data, a single dataset’s scheduled refreshes pull all needed data.