The most powerful feature of the Open Learning Initiative’s web-based instruction is that it allows you as a course author to turn every instructional activity into a source of insight into how well students are learning.

The data generated as students interact with instructional activities allows the OLI course editor to make corrections, suggestions, and cues tailored to an individual student’s current performance, and gives you an unprecedented opportunity to stay in tune with many aspects of your students’ learning.

Now, with the release of version 26 of the course editor, incorporating data-driven course design is easier than ever. For the first time, student data is embedded right into the course editor alongside the course content itself.

This data is meant to drive course improvement by highlighting skills and assessments that students are struggling with. The most important information is presented in the course editor itself, and the expanded set of course data is available as an Excel-format download.

The course editor provides a core set of data alongside the course content:

**Number of attempts**

The “number of attempts” is the total number of interactions across all students for the selected assessment or skill. The same student can answer a question multiple times, with both correct and incorrect responses.

**Relative difficulty**

The “relative difficulty” is an estimate of a question’s difficulty based on how many students answer it incorrectly; a higher number means a more difficult question. The ratio is calculated with this formula:

Relative difficulty = (# Hints Requested + # Incorrect Answers) / (# Hints Requested + # Incorrect Answers + # Correct Answers)
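The formula can be sketched as a small helper function (a hypothetical illustration, not part of OLI’s tooling):

```python
def relative_difficulty(hints: int, incorrect: int, correct: int) -> float:
    """Estimate of question difficulty; higher means harder.

    Mirrors the course editor's formula. This is an illustrative
    helper, not an official OLI function.
    """
    total = hints + incorrect + correct
    if total == 0:
        return 0.0  # no student interactions yet
    return (hints + incorrect) / total

# 5 hints and 10 wrong answers out of 50 total interactions -> 0.3
relative_difficulty(5, 10, 35)
```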

**Eventually correct**

The “eventually correct” statistic represents the percentage of students who eventually answer the question correctly, even if it takes multiple attempts.

**First try correct**

Similarly, the “first try correct” statistic represents the percentage of students who answer the question correctly on their first try; subsequent attempts are ignored.
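To make the distinction between the two statistics concrete, here is one way to derive both from a chronological attempt log. The tuple format is illustrative only, not OLI’s actual data schema:

```python
from collections import defaultdict

def attempt_stats(attempts):
    """Return ("eventually correct" %, "first try correct" %).

    `attempts` is a list of (student_id, is_correct) tuples in
    chronological order -- an assumed format for illustration.
    """
    first = {}                 # student -> correctness of first attempt
    ever = defaultdict(bool)   # student -> correct at least once
    for student, ok in attempts:
        if student not in first:
            first[student] = ok
        ever[student] = ever[student] or ok
    n = len(first)
    if n == 0:
        return 0.0, 0.0
    eventually = 100 * sum(ever.values()) / n
    first_try = 100 * sum(first.values()) / n
    return eventually, first_try

# Student "a" misses then recovers, "b" gets it right away, "c" never does:
# eventually correct = 2/3 of students, first try correct = 1/3.
attempt_stats([("a", False), ("a", True), ("b", True), ("c", False)])
```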

The dataset download contains the full set of course data intended to guide course improvement. When you download the full dataset, you will see three spreadsheet tabs, each presenting the same data aggregated in a different way.

**byResource**: Course data aggregated by assessment (each assessment contains a set of questions)

**bySkill**: Course data aggregated by skill (a skill may be attached to multiple questions throughout a course)

**byPart**: Course data aggregated by question part (most questions have a single part, so this tab shows data for the question itself, but some question types, like fill in the blank, have multiple parts per question)
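If you analyze the download programmatically, the three tabs can be loaded in one call with pandas. In the sketch below, the workbook filename and column names are assumptions; the snippet fabricates a small bySkill tab so it runs standalone:

```python
import pandas as pd

# Loading the real download would look like (filename assumed):
#   sheets = pd.read_excel("oli_dataset.xlsx",
#                          sheet_name=["byResource", "bySkill", "byPart"])
# Here we stand in a made-up bySkill tab so the example is self-contained.
by_skill = pd.DataFrame({
    "Skill": ["fractions", "decimals", "ratios"],
    "Average Help Needed": [0.42, 0.15, 0.31],
})

# Surface the skills students are struggling with most.
hardest = by_skill.sort_values("Average Help Needed", ascending=False)
print(hardest["Skill"].tolist())  # hardest skills first
```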

Each data point in the analytics dataset is a statistic that measures course content effectiveness or student engagement with the content.

**Distinct Students**: The number of students enrolled in the administered course sections for the dataset.

**Distinct Registrations**: The number of registrations in the administered course sections for the dataset. Distinct registrations may be higher than distinct students because the same student may be registered in multiple sections.

**Opportunities**: The number of “opportunities” for a student to answer a question, skill, or resource, counted by the total number of question parts. Most questions have one part, but some, such as fill in the blank, may have more than one part.

**Practice**: The number of student interactions with a particular question, skill, or resource. A student may answer the same question more than once if the assessment allows it.

**Hints**: The number of hints requested by students.

**Errors**: The number of incorrect answers by students.

**Correct**: The number of correct answers by students.

**First Response Correct**: The number of times a student answered the question correctly on the first try. Subsequent attempts are ignored.

**Utilization Start**: The percentage of students registered for a course who attempt the problem.

Utilization Start = Practice / (Distinct Registrations * Opportunities)

**Utilization Finish**: The percentage of students registered for a course who complete the problem.

Utilization Finish = Correct / (Distinct Registrations * Opportunities)
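Both utilization formulas can be sketched together; the parameter names mirror the dataset columns above, but the function itself is a hypothetical helper:

```python
def utilization(practice, correct, distinct_registrations, opportunities):
    """Return (Utilization Start, Utilization Finish) as percentages.

    A sketch of the two formulas above; not an official OLI helper.
    """
    denom = distinct_registrations * opportunities
    if denom == 0:
        return 0.0, 0.0  # no registrations or no question parts
    return practice / denom * 100, correct / denom * 100

# 100 registrations, 1 opportunity, 80 attempts, 60 correct -> (80.0, 60.0)
utilization(80, 60, 100, 1)
```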

**Average Help Needed**: This is the “relative difficulty” statistic shown in the course editor. It is an estimate of a question’s difficulty based on how many students answer it incorrectly; a higher number means a more difficult question. The ratio is calculated with this formula:

Average Help Needed = (# Hints Requested + # Incorrect Answers) / (# Hints Requested + # Incorrect Answers + # Correct Answers)

**Average Number of Tries**: The average number of times a student attempts the problem.

Average Number of Tries = (Errors + Correct) / Practice

**Completion Rate**: The rate at which students eventually answer this problem correctly once they attempt it.

Completion Rate = Correct / Practice

**Accuracy Rate**: The percentage of the time students answer this question correctly on their first try.

Accuracy Rate = First Response Correct / Practice
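The four derived statistics above can be computed from the raw counts in one pass. The keys mirror the dataset column names, but the function is an illustrative sketch, not part of OLI:

```python
def derived_stats(hints, errors, correct, first_response_correct, practice):
    """Compute the four derived metrics from the raw counts.

    Guards against division by zero when a question has no data yet.
    """
    interactions = hints + errors + correct
    return {
        "Average Help Needed": (hints + errors) / interactions if interactions else 0.0,
        "Average Number of Tries": (errors + correct) / practice if practice else 0.0,
        "Completion Rate": correct / practice if practice else 0.0,
        "Accuracy Rate": first_response_correct / practice if practice else 0.0,
    }

# 10 hints, 30 errors, 60 correct over 80 attempts, 40 right on the first try:
# Average Help Needed = 40/100 = 0.4, Completion Rate = 60/80 = 0.75, etc.
stats = derived_stats(hints=10, errors=30, correct=60,
                      first_response_correct=40, practice=80)
```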