Learning and Evaluation/Evaluation reports/2015/GLAM/Limitations
Response rates, data quality, report limitations
GLAM data for this report were collected from three sources:
The data obtained included: number of participants, implementation start and end times, number of media files uploaded, number of unique media files used, and ratings of image quality (Featured, Quality, and Valued). Where start or end times were not reported, we estimated the dates from the first or last day a media file was uploaded to an implementation category. Only a minority of implementations reported key inputs such as budget, staff hours, and volunteer hours, and this information cannot be mined. Thus, while we explore these inputs, we cannot draw strong conclusions about program scale or about how these inputs affect program success.

In addition, the data for GLAM implementations are not normally distributed. This is due partly to small sample size and partly to natural variation, and it rules out comparisons of means or other analyses that require normal distributions. Instead, we present medians and ranges of metrics, and we use the term average to refer to the median, which is a more statistically robust measure of central tendency than the arithmetic mean. To give a more complete picture of the distributions, we also include means and standard deviations for reference. Summary statistics for the reported and mined data, including counts, sums, medians, arithmetic means, and standard deviations, appear in the appendix.
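The choice of the median over the arithmetic mean can be illustrated with a small sketch. The figures below are invented for illustration only, not the report's actual data; they mimic a skewed distribution in which one unusually large implementation pulls the mean far from the typical value while the median stays representative:

```python
import statistics

# Hypothetical media-upload counts for nine implementations (illustrative
# only). The single large outlier skews the distribution to the right.
uploads = [120, 150, 180, 200, 240, 300, 350, 400, 9000]

mean = statistics.mean(uploads)      # pulled upward by the outlier
median = statistics.median(uploads)  # robust to the outlier
stdev = statistics.stdev(uploads)    # sample standard deviation, for reference

print(f"mean={mean:.1f}, median={median}, stdev={stdev:.1f}")
```

Here the mean (over 1,200) describes none of the implementations well, while the median (240) sits in the middle of the typical values, which is why the report treats the median as the average and lists means and standard deviations only as supplementary references.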
Priority goals
To learn which program goals were important to GLAM program leaders, we asked them to share their priority goals for each GLAM implementation.[3] Four GLAM organizers reported priority goals for nine implementations. The number of priority goals selected ranged from four to ten; the average number selected was nine.[4][5] The table below presents the priority goals selected by GLAM program leaders, listed by frequency of selection.