Benchmark methodology
The benchmark layer should never behave like a black box. Operators need to understand what data is being compared, how it is cleaned, and why a range is being labeled as a benchmark rather than an opinion.
1. Source labeling
Every benchmark note should make source type visible. Ashmo uses four source labels: Official UAE Data, Licensed Market Research, Ashmo Operator Benchmark, and Model Assumption. These are not cosmetic tags. They explain the confidence level of the number.
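The four labels above can be expressed as a small enum with an explicit confidence ordering. This is a minimal sketch: the label names come from the text, but the numeric ranking and the `confidence_rank` helper are illustrative assumptions, not the platform's actual implementation.

```python
from enum import Enum

class SourceLabel(Enum):
    # Label strings taken from the methodology text.
    OFFICIAL_UAE_DATA = "Official UAE Data"
    LICENSED_MARKET_RESEARCH = "Licensed Market Research"
    ASHMO_OPERATOR_BENCHMARK = "Ashmo Operator Benchmark"
    MODEL_ASSUMPTION = "Model Assumption"

def confidence_rank(label: SourceLabel) -> int:
    """Lower rank = higher confidence. The ordering is an assumption
    used here only to show that labels carry a confidence signal."""
    order = [
        SourceLabel.OFFICIAL_UAE_DATA,
        SourceLabel.LICENSED_MARKET_RESEARCH,
        SourceLabel.ASHMO_OPERATOR_BENCHMARK,
        SourceLabel.MODEL_ASSUMPTION,
    ]
    return order.index(label)
```

Making the label a typed enum rather than a free-text tag means a note cannot be published with an unrecognized source type.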
2. Submission standard
No operator submission should enter the benchmark layer without a minimum data structure. At a minimum, the platform should ask for monthly sales, labor summary, rent or occupancy cost, COGS summary, and delivery channel mix. If the file structure is incomplete, the submission should be held out rather than allowed to distort the range.
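A completeness gate for that minimum structure can be sketched as follows. The field names (`monthly_sales`, `labor_summary`, and so on) are hypothetical placeholders for whatever schema the platform actually uses.

```python
# Assumed field names for the five minimum data points; not the real schema.
REQUIRED_FIELDS = {
    "monthly_sales",
    "labor_summary",
    "occupancy_cost",        # rent or occupancy cost
    "cogs_summary",
    "delivery_channel_mix",
}

def is_complete(submission: dict) -> bool:
    """A submission passes only if every required field is present and non-empty."""
    return all(submission.get(field) is not None for field in REQUIRED_FIELDS)
```

Incomplete files fail this check before any numbers are read, which is what keeps a half-filled upload from skewing a published range.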
3. Validation
Validation should happen in three steps: structural checks, reasonableness checks, and manual review of submissions flagged as suspicious or unusual. This protects the benchmark layer from poor-quality data and from operators uploading numbers that are too incomplete to trust.
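The three-step flow can be sketched as a single function that returns one of three outcomes. The specific fields checked and the thresholds (a labor ratio band, a 60% review trigger) are illustrative assumptions, not Ashmo's actual rules.

```python
def validate(submission: dict) -> str:
    """Return 'accepted', 'rejected', or 'manual_review'.
    Fields and thresholds are illustrative, not the platform's real rules."""
    # Step 1: structural check - required numeric fields must exist.
    for field in ("monthly_sales", "labor_cost"):
        if not isinstance(submission.get(field), (int, float)):
            return "rejected"
    if submission["monthly_sales"] <= 0:
        return "rejected"

    # Step 2: reasonableness check - ratios must fall in a plausible band.
    labor_pct = submission["labor_cost"] / submission["monthly_sales"]
    if not 0 < labor_pct < 1:
        return "rejected"

    # Step 3: flag structurally valid but unusual submissions for human review.
    if labor_pct > 0.60:
        return "manual_review"
    return "accepted"
```

Keeping "rejected" and "manual_review" as distinct outcomes matters: a rejection is a data-quality failure, while a flag preserves a possibly genuine but unusual operator for a human decision.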
4. Anonymization
No single operator should ever be identifiable inside a published range. Benchmarks should only be released after concept buckets, geography groups, and sample thresholds are met. Raw operator files should never be exposed publicly.
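A publication gate over concept and geography buckets can be sketched like this. The grouping keys and the minimum sample size of 8 are assumptions for illustration; only aggregated ranges leave the function, never raw rows.

```python
from collections import defaultdict

MIN_SAMPLE = 8  # illustrative threshold, not Ashmo's actual number

def publishable_ranges(submissions: list[dict]) -> dict:
    """Group submissions by (concept bucket, geography group) and release
    a (low, high) range only for groups that meet the sample threshold."""
    groups = defaultdict(list)
    for s in submissions:
        groups[(s["concept"], s["geo"])].append(s["value"])
    # Raw operator values stay inside this function; only ranges are returned.
    return {
        key: (min(vals), max(vals))
        for key, vals in groups.items()
        if len(vals) >= MIN_SAMPLE
    }
```

A bucket below the threshold simply does not appear in the output, so a thin sample can never expose a single operator's numbers as "the range."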
5. Sample threshold and outliers
A range should not be published below a minimum sample threshold. Outliers should be quarantined first, then reviewed. Some outliers may be excluded from ranges and later turned into blind case studies because they are more valuable as insight than as benchmark noise.
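One common way to quarantine outliers before review is a Tukey-style IQR fence. The document does not specify the method, so this sketch is an assumption: it uses the conventional 1.5x fence and Python's standard library quantiles.

```python
import statistics

def quarantine_outliers(values: list[float], k: float = 1.5):
    """Split values into (kept, quarantined) using an IQR fence.
    k=1.5 is the conventional Tukey fence, assumed here for illustration.
    Quarantined values are held for review, not silently deleted."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    kept = [v for v in values if low <= v <= high]
    quarantined = [v for v in values if v < low or v > high]
    return kept, quarantined
```

Because the quarantined list is returned rather than dropped, an extreme value can later be examined and, where warranted, written up as a blind case study instead of contaminating the range.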
6. Reporting cadence
Benchmark notes should show the period covered and the refresh cadence. The value of a range is not just the number. It is the clarity around whether that number reflects a recent operating window or an outdated one.
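A freshness check against the stated cadence can be sketched as below. The function name, the month-based age calculation, and the two labels are hypothetical; the document only requires that period and cadence be shown.

```python
from datetime import date

def freshness_label(period_end: date, refresh_months: int, today: date) -> str:
    """Label a benchmark range 'current' if it is within its refresh cadence,
    otherwise 'stale'. Month granularity is an illustrative assumption."""
    age_months = (today.year - period_end.year) * 12 + (today.month - period_end.month)
    return "current" if age_months <= refresh_months else "stale"
```

Surfacing this label next to the range gives the reader the distinction the text asks for: a recent operating window versus an outdated one.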