MikeKD Posted December 15, 2014

Hi folks,

My music dept database is used to record grade % and attendance in class lessons, and attendance in ensembles (choirs, orchestras etc). It includes the following tables:

- Pupils - includes Overall_avg, This_academic_Yr_Avg, Age, MusicScholar, AcademicScholar and SpecialNeeds (a text field describing dyslexia etc) fields, amongst others.
- Assessments - the table for generic info about assessments, so different classes can use the same assessment (e.g. Y7 end of year exam etc).
- Class - with class_avg, Academic_Year, Year_Group, Class_Attndance_Avg, Lesson_Day and Lesson_Time.
- Assessment_Class_Join - self-explanatory? I can't think of a more useful name at the moment, but am working on it!!
- Pupil_Class_Assess_Join - this is the table where the individual record of present, absent, grade %, notes and date goes for each pupil. I ought to rename it but haven't been inspired with a better name yet - MarkSheet? Any better ideas?!!

Any road up, I'm getting to the stage where I've got easy access to lots of data, which makes writing end-of-term reports far easier (I've got those in a separate table), but there's potentially a whole lot more I can do with this data. For instance (getting harder!):

- Could I have a notification pop up when I open a class to inform me which kids are under- or over-achieving (or if any have birthdays today!)?
- Could I somehow spot trends of over- or under-achievers? Do scholars do better? Kids born in October? Girls? (Actually, they will - we're a girls' school!) Violinists?
- Could I predict likely future achievement in public exams based on previous cohorts?
- Any obvious opportunities I've missed?

And then: what's the best way to present this?

- A dedicated data-crunching layout
- A spot on the pupils layout
- On the class layout

Sorry to be so nebulous, but I really have no idea what I'm talking about: what's possible, what's not, and how to achieve the possible!!

Cheers, Mike
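PS: to make the first idea a bit more concrete, here's roughly the check I have in mind, sketched in Python with made-up names and numbers (the real data lives in the FileMaker tables above, and the 10% threshold is plucked from thin air):

```python
# A rough sketch (not FileMaker) of the under-/over-achiever check described
# above. Table and field names are assumptions based on the schema in the post.

# Each mark-sheet row links a pupil to a class and an assessment.
marksheet = [
    {"pupil": "Alice", "class": "7M1", "grade_pct": 82, "present": True},
    {"pupil": "Beth",  "class": "7M1", "grade_pct": 58, "present": True},
    {"pupil": "Cara",  "class": "7M1", "grade_pct": 71, "present": False},
]

def flag_outliers(rows, threshold_pct=10):
    """Flag pupils whose grade sits more than threshold_pct above or below
    the class average (a crude stand-in for 'under/over-achieving')."""
    graded = [r for r in rows if r["grade_pct"] is not None]
    class_avg = sum(r["grade_pct"] for r in graded) / len(graded)
    flags = {}
    for r in graded:
        diff = r["grade_pct"] - class_avg
        if abs(diff) > threshold_pct:
            flags[r["pupil"]] = "over" if diff > 0 else "under"
    return class_avg, flags

avg, flags = flag_outliers(marksheet)
print(f"class average: {avg:.1f}%", flags)
```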
jbante Posted December 16, 2014

Almost all data analyses are done for one of two reasons: knowledge, and informing decisions. I'm guessing that you're in the latter category. I presume that your broader goal is to improve outcomes for your students. I have some questions for you that may help clarify your approach or stimulate some more ideas:

- What outcomes matter? Think beyond just what you happen to have data for at the moment. Good grades are nice, of course, but what about being accepted to music schools? The prestige of the music schools students get accepted to? Academic performance outside of music? Extra-curricular music activity? Future salary? General quality of life? Requests from new students to participate in your music department, in instrument and part sections you need to fill?
- How can you measure the outcomes that matter?
- What are you able to manipulate about your students' lives? What interventions are possible?
- What are you willing to manipulate to hopefully achieve uncertain outcomes? Are you really going to do what the results of an analysis suggest is best? If an analysis tells you that an underachieving student is unlikely to be helped by an intervention, are you comfortable with the thought of neglecting a possible diamond in the rough in case the analysis turns out to be wrong?
MikeKD (Author) Posted December 16, 2014

Hi jbante, thanks for coming up with such penetrating questions!! Here are my (slightly alcohol-fuelled!) answers:

What outcomes matter?

Although ultimately it is the final destination that is most important, in these last years of the girls' musical education with us we've got to know them so well that our gut instinct and experience is probably much more useful than data. Having said that, if the data could confirm or contradict our gut feel, either outcome helps stimulate useful discussion and can be useful in advising student or teacher on the way forward. Beyond that: getting the highest possible achievement from the first three years of senior school, so that all girls have music as a genuine option for the next stage.

How can you measure the outcomes that matter?

It's easy to collate and record the next destinations of pupils. Making that measurable might mean some kind of ranking order for colleges and universities - i.e. is Oxford / Cambridge better than the Royal College of Music? The public exams in Year 11 and Year 13 give objective numerical results. We do internal exams in Years 7-9 that are objective, but only measure listening skills and knowledge learnt. We do frequent assessments on performing and composing that we try to make as objective as possible.

What are you able to manipulate about your students' lives? What interventions are possible?

We can persuade a girl to join a choir or band. We can provide extra lessons in specific areas. We can bring in external experts to address dept-wide weaknesses (or strengths). We can arrange meetings with pupils / parents. We can reward and punish. We can recommend a girl take up / give up an instrument. We can recommend that a girl opts to take music / give up music at the next stage of her education.

What are you willing to manipulate to hopefully achieve uncertain outcomes? Are you really going to do what the results of an analysis suggest is best?

We are a relatively small school (470 girls in the senior school) and know our pupils very well. We're at the top end of private schools and jump through any possible hoop to help our pupils get where they want to be, even if it seems unachievable. I don't think that will change.

What then am I hoping the data will give us? School management and the inspectorate live their lives crunching data in Excel. Musicians have a reputation for being creative and disorganised; to hold our own in a world where the arts are marginalised, it's important to be able to provide data. I'm hoping that this will be like an extra voice or member of staff, looking out for pupils whose amazing or worrying achievements might otherwise go under the radar.

Cheers! Mike
jbante Posted December 17, 2014

The most obvious action you could take based on your data is to identify underachievers and overachievers to prioritize actions, but you already have the data and techniques to do that: just sort students by their grades, start at one end of the list, and work your way through the list in order.

Otherwise, "under/overachievement" is uncomfortably vague. It could just be students whose grades happen to fall above and below particular extreme quantiles of the distribution of students, but I personally think that's the wrong way to approach it. You want to know who the under/overachievers are so you can do something about it, something that avoids some outcomes and leads to some other outcomes. This suggests that there's some less desirable outcome that you expect to happen if you don't change what you're doing. An under/overachieving student is a pupil whose current trajectory leads to a less desirable outcome than is possible or likely with some intervention. When I put it like that, it sounds like every student to me, and the issue becomes one of matching students with interventions rather than ranking them at the tails of a distribution of achievement.

The broader arc of my questions was to lead you towards the data you'd have to record to be able to establish causal relationships between your actions and outcomes. Positively identifying causal relationships requires deliberate experimentation. Since inaction is not a reasonable option, that constrains what kind of experimentation is possible: you don't have the option of comparing action to inaction. You might still have the option of comparing one action to another, which you can use to prefer more effective interventions over less effective ones, or, since you have students for a long time, to match certain interventions to certain students. This is often a more valuable experimental method anyway.

Outside of deliberate experiments, all you have left in your data is correlations. The presence of a correlation can't prove the presence or direction of causation, but the absence of correlation is reasonable evidence against a causal relationship. And although the presence of a correlation doesn't prove causation, a strong enough correlation may be actionable anyway. The arrow of time can also be helpful here: if B and A are strongly correlated, and B always happens after A, we can be pretty confident that B doesn't cause A.
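To make that concrete, here's a minimal sketch of a correlation check, in Python rather than FileMaker, with invented numbers (your real figures would come from the Pupils and mark-sheet tables):

```python
from statistics import correlation  # Python 3.10+

# Invented per-pupil figures: attendance rate this year and exam grade %.
attendance = [0.95, 0.80, 0.99, 0.70, 0.88, 0.92]
grade_pct = [78, 61, 85, 55, 70, 74]

# Pearson correlation: near +1 or -1 suggests a strong linear association;
# near 0 is reasonable evidence against a causal link, though a strong
# correlation alone can't prove one.
r = correlation(attendance, grade_pct)
print(f"attendance vs grade correlation: {r:.2f}")
```

The same check run on rank-transformed values (Spearman's method) is less sensitive to a few extreme pupils, which matters with small class sizes.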
"although ultimately it is the final destination that is most important, in these last years of the girls' musical education with us; we've got to know them so well that our gut instinct and experience is probably much more useful than data"

Your experience is data. You just need a systematic way to record that experience, ideally as it happens. There's no reason teacher assessments can't be just another variable, no different from grades and attendance. It may seem less objective, but teachers have access to information we may not think to or be able to document in a database, and may know before our analysis does what information is important. Both of those factors, quantity of information and accurate assessment of its quality, can be very important. Further, if data are supposed to confirm or deny teacher assessments, you won't know which it is if you don't have the teacher assessments!

"It's easy to collate and record the next destinations of pupils. Making that measurable might mean some kind of ranking order for colleges and universities - i.e. is Oxford / Cambridge better than the Royal College of Music? The public exams in Year 11 and Year 13 give objective numerical results. We do internal exams in Years 7-9 that are objective, but only measure listening skills and knowledge learnt. We do frequent assessments on performing and composing that we try to make as objective as possible."

Do you already record where pupils wind up? How? How comprehensive is that data? I'm concerned about potential response bias: if the only students who respond are students who do well, or something else like that, you won't have representative data.

If there's any reason to doubt the quality of your data on what schools students get accepted to, it may be worth looking at proxy measurements instead, mainly the admissions criteria used by the schools. Grades (which you have), standardized test scores (you have the Year 11 and Year 13 exam scores, but are there any college admissions exams used, and what access do you have to the results?), and auditions seem like obvious criteria. I'm sure there are others. How closely does the format of your performance and composition assessments match the university auditions? Consider that "objectivity" may be less valuable than fidelity if you want to use the assessments as proxy measurements for college admissions prospects.

Ranking order is not strictly necessary for a meaningful outcome measurement. There are plenty of useful boolean outcomes. Was a student accepted to any college? Was a student accepted to any college's music department? Did a student actually enroll, or even apply? (This last one strikes me as the kind of thing that may be the students' prerogative and not necessarily any of your business, but I suppose it might be a measure of how engaged the student was with music at your school, or of the student's confidence in their prospects.)

If you think destination quality is appropriate to include in your analysis, I think "rank order" is the right way to think about it, as opposed to a more contrived quality index. Viewing quality as an ordinal ranking instead of a continuous index will require different statistical methods, which, as a bonus, are a little more difficult to misapply. However, I'd suggest that an overall ranking may be less valuable than individual rankings for each student. What is each student's order of preference for the schools they applied to? Was each student accepted to their individual top-choice school, or somewhere else in their individual ranking?
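For instance, the top-choice question can be answered per student with no global ranking of schools at all. A rough sketch in Python, with invented pupils, schools, and preference lists:

```python
# Each pupil's own preference order plus the offers they received.
# Names, schools, and lists here are invented for illustration.
applications = {
    "Alice": {"preferences": ["RCM", "Oxford", "Leeds"], "offers": ["Oxford", "Leeds"]},
    "Beth": {"preferences": ["Cambridge", "RNCM"], "offers": ["RNCM"]},
}

for pupil, app in applications.items():
    # Rank (0 = first choice) of the best offer within the pupil's own
    # preference list; None if they received no offers from it.
    ranks = [app["preferences"].index(s) for s in app["offers"] if s in app["preferences"]]
    best = min(ranks) if ranks else None
    if best == 0:
        print(f"{pupil}: accepted to top choice")
    elif best is None:
        print(f"{pupil}: no offers")
    else:
        print(f"{pupil}: best offer at preference rank {best}")
```

Per-pupil booleans like this aggregate cleanly (e.g. the fraction of a cohort accepted to their top choice) without ever deciding whether Oxford outranks the RCM.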
MikeKD (Author) Posted December 18, 2014

Thanks again jbante. At some stage I'm going to have to re-read your posts and go into more depth, but they've flagged the following urgent points in my mind:

- A pure ranking order doesn't necessarily identify students who are over-achieving or under-achieving. I also need a way to predict what they should be able to achieve, and then spot if they're doing significantly better or worse than that (I've had a go at sketching what I mean below).
- That means I need better ways to measure pupils' potential. We have a baseline test, but these are notoriously unreliable; I also somehow need to include things like choir membership, skill with music software, and external grade exam results.

Lots to ponder here - it's great having such high-quality comments. I'm very grateful!!
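PS: here's roughly the "better or worse than predicted" idea, sketched in Python with made-up numbers (my real data lives in FileMaker, and I'm assuming a simple straight-line relationship between baseline score and grade, which may well be wrong):

```python
from statistics import linear_regression, stdev  # Python 3.10+

# Made-up data: baseline test score vs. current grade % for a cohort.
baseline = [95, 110, 102, 120, 88, 105, 115]
grade = [62, 75, 70, 84, 55, 68, 90]

# Fit a simple line predicting grade from baseline, then flag pupils whose
# actual grade deviates from the prediction by more than ~1.5 standard
# deviations of the residuals: candidates for "doing significantly better
# or worse than expected".
slope, intercept = linear_regression(baseline, grade)
residuals = [g - (slope * b + intercept) for b, g in zip(baseline, grade)]
cutoff = 1.5 * stdev(residuals)

for i, r in enumerate(residuals):
    if abs(r) > cutoff:
        label = "over-achieving" if r > 0 else "under-achieving"
        print(f"pupil {i}: residual {r:+.1f} -> possibly {label}")
```

Presumably in FileMaker this would end up as a calculation comparing each pupil's grade to a predicted grade, but I haven't worked out how yet!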