Cohort Insights is a customizable analytics tool that unlocks actionable insights about your buildings' energy and water usage, as well as carbon emissions, to help improve building performance.
This is done through custom peer group comparisons. Peer groups, also known as cohorts, are groups of buildings that can be defined using a wide variety of building characteristics, including building type, size, location, and data coverage.
Please refer here for steps on how to navigate Cohort Insights.
Please view our Cohort Insights methodology article here.
Frequently Asked Questions
Who is Cohort Insights available to?
- Cohort Insights is available to Portfolio Managers, Portfolio Members, Subgroup Managers, and Subgroup Members through the Premium Subscription Tier. For more information on the Premium Subscription Tier, please reach out to your Account Manager.
Can cohorts be saved?
- For Version 1 of Cohort Insights (released Dec. 18th), no.
- For Version 2 of Cohort Insights (ETA Jan. 2021), yes.
Will Cohort Insights reveal my data to other Measurabl customers?
- No. There are measures in place to ensure your data is never exposed.
NOTE: Cohort Insights enforces thresholds for the minimum number of buildings allowed in a cohort and the minimum number of buildings from outside your portfolio allowed in a cohort. These thresholds preserve data privacy. Cohort Insights also never exposes the exact size of a cohort; cohort sizes are displayed in ranges such as 100-499 Buildings or <100 Buildings. The only exception to this rule is when you choose to create a cohort of only buildings in your own portfolio.
How is Data Coverage Calculated?
- Measurabl Data Coverage combines floor area coverage and meter data completeness into a single metric by taking into account the area-weighted data completeness contribution of each meter within a building. When a building is not broken down into spaces, data coverage is the average meter data completeness of all building-level meters over the selected time frame.
- For each meter and a given timeframe (e.g., the last calendar year), we calculate a meter-level data completeness as the number of months of meter data present out of the total number of months in the timeframe (e.g., 12 months, or the number of months when the meter was active). Whole-building meters and common-area meters are allocated to spaces by floor area. The sketch below illustrates the idea.
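For illustration only (this is not Measurabl's exact implementation), the sketch below computes a building-level coverage figure under the assumptions described above: each meter's completeness is the fraction of months with data in the timeframe, and each space's completeness is weighted by its floor area. The Meter and Space structures are hypothetical, and building-level meters are assumed to have already been allocated to spaces.

```python
from dataclasses import dataclass

@dataclass
class Meter:
    months_with_data: int      # months of data present in the timeframe
    months_in_timeframe: int   # total months in the timeframe (e.g., 12)

@dataclass
class Space:
    floor_area: float          # floor area served by this space's meters
    meters: list

def meter_completeness(meter: Meter) -> float:
    """Fraction of the timeframe covered by this meter's data."""
    return meter.months_with_data / meter.months_in_timeframe

def building_data_coverage(spaces: list) -> float:
    """Area-weighted average of each space's mean meter completeness."""
    total_area = sum(s.floor_area for s in spaces)
    weighted = 0.0
    for space in spaces:
        if space.meters:
            completeness = sum(meter_completeness(m) for m in space.meters) / len(space.meters)
        else:
            completeness = 0.0
        weighted += space.floor_area * completeness
    return weighted / total_area

# Example: a 10,000 sq ft space with 12 of 12 months and a 5,000 sq ft space
# with 6 of 12 months yields (10000*1.0 + 5000*0.5) / 15000 ≈ 0.83 coverage.
spaces = [
    Space(10_000, [Meter(12, 12)]),
    Space(5_000, [Meter(6, 12)]),
]
print(building_data_coverage(spaces))
```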
Why isn’t the Measurabl Efficiency Percentile based on usage intensities?
- Usage intensity is usage divided by floor area. It is correlated with efficiency but does not tell the whole story. Variables other than floor area also contribute to usage (weather, property use type, seasonality, who pays the bill, to name a few), all of which are included in the Measurabl Efficiency Percentile.
How are data outliers handled with Cohort Insights?
- Outliers are excluded from the data used to train the Cohort Insights machine learning models, which prevents them from skewing the baseline usage. However, because there is no general way to distinguish an outlier that reflects correct but atypical usage from one caused by bad data, all building usages are included when a cohort is created.
How is the expected usage calculated when there are mixed-use sites? How does it roll up to the property level?
- Because energy is often reported at the space level, we calculate expected monthly energy usage for each space in Measurabl. This allows us to calculate expected energy usage with higher accuracy for mixed-use buildings. The space-level expected usages are then rolled up to a building-level expected monthly energy usage, as sketched below.
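The roll-up itself is a simple aggregation. Purely for illustration (the expected-usage model itself is not shown, and the space names and values are hypothetical), a building-level expected monthly usage can be obtained by summing the space-level expected usages:

```python
# Hypothetical space-level expected monthly energy usage (kWh) for a mixed-use building
expected_by_space = {
    "office_floors": 42_000,
    "ground_floor_retail": 18_500,
    "parking_garage": 6_200,
}

# Building-level expected monthly usage is the sum of the space-level values
building_expected = sum(expected_by_space.values())
print(building_expected)  # 66700 kWh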
Are percentiles a common methodology used for comparative analytics?
- Yes. Using percentiles to represent a rank within a dataset is common. However, the measure on which buildings are ranked, in our case the expected usage, is proprietary to Measurabl. A generic example of a percentile rank is sketched below.
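To illustrate the general idea of a percentile rank only (this is not Measurabl's proprietary ranking basis), the snippet below ranks one building within a cohort as the share of cohort values it outperforms; the numbers are made up.

```python
def percentile_rank(value: float, cohort_values: list) -> float:
    """Percent of cohort values that the given value beats (lower usage ranks higher here)."""
    better_than = sum(1 for v in cohort_values if value < v)
    return 100.0 * better_than / len(cohort_values)

# Hypothetical ratios of actual-to-expected usage for a cohort of buildings
cohort = [0.85, 0.92, 1.00, 1.05, 1.10, 1.25, 1.40]
print(percentile_rank(0.95, cohort))  # ≈ 71.4, i.e., more efficient than ~71% of the cohort
```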
What is the difference between Measurabl’s ‘Peer Benchmark’ and Cohort Insights?
- Peer Benchmark was Measurabl's previous method of comparing your portfolio and buildings to others. The Measurabl Peer Benchmark will be removed from the application in early Q1 and replaced with Cohort Insights. Cohort Insights has an improved, more transparent methodology and a customizable approach to benchmarking buildings. From a technical standpoint:
- The machine learning models are different: Cohort Insights is better aligned with the vision of a fair efficiency baseline.
- Data outliers are excluded from the training data for Cohort Insights, whereas outliers were included in the training data for the Peer Benchmark.
- Cohort Insights now trains only on data points with full data coverage, whereas the Peer Benchmark also trained on partial data.
- Expected usages are calculated at the space level for the energy and carbon metrics, whereas the Peer Benchmark used building-level expected usages for those metrics.
Why is it that when I select “Last 12 Months” as my time period, the date range is the last 12 months starting two months ago?
- On average, more than half of utility data comes in at least 6 months after the end usage date. About one third of the data is in after 2 months, and only slightly more after 3 months. There is a built-in 2-month lag to account for this data entry behavior. There would not be sufficient data for meaningful comparisons within the first 2 months after a usage period ends, because the comparison data set would include less than one third of the Measurabl database. The sketch below shows how such a lagged window can be computed.
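For illustration, the sketch below computes a "Last 12 Months" window that ends two months before the current month; the 2-month lag comes from the explanation above, while the helper function itself is hypothetical.

```python
from datetime import date

def last_12_months_window(today: date, lag_months: int = 2):
    """Return (start, end) months for a 12-month window ending lag_months ago."""
    # Counting months from year 0 keeps the month arithmetic simple
    end_index = today.year * 12 + (today.month - 1) - lag_months
    start_index = end_index - 11
    end = date(end_index // 12, end_index % 12 + 1, 1)
    start = date(start_index // 12, start_index % 12 + 1, 1)
    return start, end

# Example: selecting "Last 12 Months" in March 2021 covers Feb 2020 through Jan 2021
print(last_12_months_window(date(2021, 3, 15)))
```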
Any other questions that weren't answered here? Please reach out to our Support Team!