This paper presents a new dataset for automated driving that identifies and addresses a gap in existing perception datasets. While most state-of-the-art perception datasets focus on providing various on-board sensor measurements together with semantic information under diverse driving conditions, this information is often insufficient because the provided object lists and position data contain unknown and time-varying errors. The present paper and the associated dataset describe the first publicly available perception measurement data that include not only on-board camera, lidar, and radar measurements with semantically classified objects, but also high-precision ground-truth position measurements, enabled by accurate RTK-assisted GPS localization systems on both the ego vehicle and the dynamic target objects. The paper explains how the data were captured, describes the metadata structure and content in detail, and outlines application examples where the dataset has been, and can be, used in the development, testing, and validation of automated driving and environmental perception systems.
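As a minimal illustration of how such RTK ground truth supports perception validation, the following Python sketch transforms a target's ground-truth position into the ego vehicle frame and compares it against a perception detection. All variable names, the shared local ENU frame, and the example numbers are illustrative assumptions for this sketch, not the dataset's actual schema.

```python
# Hypothetical sketch: comparing a perception detection against RTK ground
# truth. Field layouts and coordinate conventions are assumptions, not the
# dataset's actual format.
import numpy as np

def ground_truth_relative_position(ego_pos_enu, ego_yaw, target_pos_enu):
    """Transform the target's RTK position into the ego vehicle frame.

    ego_pos_enu, target_pos_enu: (x, y) in a shared local ENU frame [m].
    ego_yaw: ego heading in radians, measured from east, CCW positive.
    Returns (x, y) in an ego frame with x pointing forward.
    """
    dx, dy = np.asarray(target_pos_enu) - np.asarray(ego_pos_enu)
    # Rotate the ENU offset by -yaw to express it in the ego frame.
    c, s = np.cos(-ego_yaw), np.sin(-ego_yaw)
    return np.array([c * dx - s * dy, s * dx + c * dy])

def perception_position_error(detection_xy, gt_xy):
    """Euclidean distance [m] between a detected object position and the
    RTK-derived ground truth, both expressed in the ego frame."""
    return float(np.linalg.norm(np.asarray(detection_xy) - np.asarray(gt_xy)))

# Made-up example: ego at the ENU origin heading north, target 20 m ahead.
gt = ground_truth_relative_position((0.0, 0.0), np.pi / 2, (0.0, 20.0))
err = perception_position_error((19.6, 0.3), gt)  # e.g. a radar track
print(f"ground truth (ego frame): {gt}, position error: {err:.2f} m")
```

Because both vehicles carry RTK receivers, this per-frame comparison yields a reference error that is itself accurate to a few centimetres, which is what distinguishes this kind of evaluation from comparisons against manually annotated object lists.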