Tough tests serve a positive purpose

A week or so ago, parents wrote to the press complaining that the Primary Six exam’s math paper had questions that were unusually difficult. The Ministry of Education said this year’s exam was pegged to the same difficulty level as previous years. So, what’s behind the complaints? Full essay.

24 Responses to “Tough tests serve a positive purpose”


  1. 1 Mama Lee 27 October 2009 at 18:01

    I keep tabs on what’s happening in the education system as I have a child in Sec 2 and one in P5 this year. I also have many nephews and nieces, and friends with kids, who have gone through or will be taking the PSLE exams.

    I worked closely with my elder child during her P5 and P6 years and am now doing the same for my P5 kid. For sure, the syllabus for Math remains the same, but the questions have gotten progressively harder over the last 3 years. For this year’s Math paper, I am told that many top students had difficulty doing the last 5 questions of Paper 2 (which carry up to 5 marks each). These are top students. What of the average child, or even the above-average child? Many are totally demoralised.

    I feel that it is a question of fairness. I agree that a paper must have a range of questions, so that the average child can do most of the questions and there must be 2 or 3 really difficult ones to spot the top scorers.

    But it appears that SEAB has upped the ante each year: the number of difficult questions and the level of difficulty of those questions have increased progressively. Okay, so I may be biased, because I have provided only personal observations and no concrete evidence for these comments.

    But what about the SEAB’s statements? It states that it takes “careful consideration” to ensure “the questions are within the respective syllabus and within the pupil’s abilities and experiences”.

    SEAB also said that there are “processes” in place to “calibrate the difficulty level of each question and to control the overall standard of the paper”.

    But these are general statements. Are we able to challenge them? Of course not, because the full set of exam questions for each year is not available for comparison. In the absence of relevant facts, the SEAB’s general defensive statements should not be accepted wholesale.

    The SEAB should operate with transparency and fairness. A general statement from the SEAB that the exams are carefully calibrated does not make the exams fair.

    The general feeling amongst parents, teachers and principals (who dare not speak up) is that the exams are increasingly difficult and unfair. The only people benefitting from this system are in the private education industry, which is raking in pots of gold helping students prepare for, and second-guess, whatever impossible math/IQ questions will be asked in the PSLE.

  2. 2 Avoiding the answers? 27 October 2009 at 22:03

    Regarding the second part of your article, I’m not sure whether your response to the interviewers was born of an intent to stimulate them into thinking harder about their questions, or whether you were simply being difficult and particularly obnoxious towards them. You were an interviewee, not their professor.

    Their project does not seem to have been merely about the act of hiring gay people, but also about their (continued) employment. However, even if it had just been the former, sexual orientation can often be stated in, or inferred from, questionnaire and interview answers if a person is open and honest in their replies, so I think you may have been misguiding them.

    What about questions such as “marital status”? If you have to reply “civil partnership/union”, it is immediately obvious what your sexuality is, and if you are single and over a certain age, some people (rightly or wrongly), may infer it, just as they may if the person is an effeminate male or masculine woman (again, rightly or wrongly). Discrimination can also be against perceived sexual orientation; it doesn’t have to be an accurate perception.

    As for continued employment, it will become apparent at some stage what a person’s sexual orientation is unless the person feels obliged to hide it in a way that his/her heterosexual colleagues and friends in the workplace do not. The mark of a good employer is that it makes no difference to their employment and promotion prospects and they are judged according to their ability and performance.

    • 3 yawningbread 28 October 2009 at 22:43

      The question that I was describing was very specifically focussed on the hiring decision. Further into the interview, the discussion moved into other aspects of the employer/gay-employee relationship… this larger part of the interview was not mentioned in my article.

      And how many people would put “civil union” in a marital status checkbox?

      • 4 Avoiding the answers? 29 October 2009 at 01:20

        “And how many people would put “civil union” in a marital status checkbox?”

        Anyone who’s in one if they’re honest. They are not “single”, and it is their marital status.

        Sorry, my previous post was a bit OTT.

  3. 5 raelynn 27 October 2009 at 22:16

    while i don’t disagree with SEAB setting difficult questions in each paper, having been through the conventional “moderated” PSLE, O and A levels, and currently going through UOL’s external exams where there is no moderation, i would say that the PSLE, O and A levels do indeed require more transparency.

    allow me to first explain how the UOL external exam system works. right at the start of the degree, we know what marks will earn us a certain honours classification for each subject, and then what our aggregate marks and number of awards must be for the degree itself to be awarded at that honours level. we are constantly reminded that there is no moderation.

    to be fair for comparison, i’ll use math papers as a reference. the papers are completely within the syllabus, but there are still tough questions that require you to recognise that the question is actually asking you to apply a certain model or a certain formula, i.e. they require you to fully understand the concepts in the syllabus.

    why i say there is no moderation is because you do get marks such as 28, 34, 35, 39 and 69 for subjects like math, marks that fall “in between” the classification boundaries. of course, there will be those who can get full marks, but that is by no means an easy feat achievable just by practising the “10 year series”; it requires true and proper understanding of the syllabus.

    perhaps it would be better if SEAB were more transparent about the exams by being more transparent about the moderation process: for example, publishing the marks range of the students who obtained Band 1 in that academic year. if, say, the marks range for a math paper is 77 to 90 (i’m just pulling numbers out of the air), i would say the paper is alright in terms of difficulty. but if it was 70 to 80, then there’s definitely something wrong, isn’t there, if the top scorer could only manage 80 marks? this is not to say the exam paper should be made easier, but rather that questions should require application of concepts, and teachers should be encouraged to go into better detail on topics and their concepts rather than rushing through the syllabus and then just drilling students with past papers.
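
    to illustrate the kind of disclosure i mean, here is a rough sketch in python (the marks and bands below are made up, since the real figures are exactly what is not published):

        from collections import defaultdict

        # hypothetical (mark, band) records for one subject in one year
        results = [(88, 1), (79, 1), (77, 1), (68, 2), (55, 2), (34, 3)]

        ranges = defaultdict(list)
        for mark, band in results:
            ranges[band].append(mark)

        # publish the marks range per band, e.g. 'Band 1: 77 to 88'
        for band in sorted(ranges):
            print(f"Band {band}: {min(ranges[band])} to {max(ranges[band])}")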

    my partner criticized this suggestion, saying that i was reducing students to mere digits and that there could be a backlash. well, isn’t classifying marks into Band 1, Band 2 and Band 3 reducing students to digits as well?

  4. 6 raelynn 27 October 2009 at 22:33

    i suddenly realised that the really, really small group of people who got 100 marks in math exams were probably not part of the group you would classify as having “had a lifetime of flawless exam results”, because, whether we like it or not, people in SIM at some point fell off the radar of the O or A levels as not “good enough” for public uni.

    while i agree with your analysis (including the graph), being more transparent about the moderation process would let the public and SEAB know whether a paper is of appropriate difficulty. maybe that’s why there’s always an impossible question in the paper: so that nobody gets 100 marks.

    i agree with the previous commenter that the only ones who benefit are the private institutions and tuition centers that teach “creative math problem solving”. creative math problem solving depends on the student’s grasp of the fundamentals behind the concepts, which unfortunately school teachers leave out, because exam papers don’t require you to explain or use those fundamentals, only the “end result” of the concept, which is a formula or set of formulae. this is the rude shock i got from doing university-level mathematics, and what i’m doing now is truly back to basics, but in depth on those basics, which makes me more appreciative of the stuff i learnt at A level and O level.

  5. 7 Lee Chee Wai 27 October 2009 at 22:49

    I think a good piece of research to conduct would be an analysis of how well students did on each question. The raw data, if the ministry of education cares to be transparent about it, is certainly available.

    We can start with an average student score for each question and use that as a gauge of the overall difficulty of each question. We can then proceed to construct a statistical profile of student scores for each question. The results should be useful in determining if Mama Lee’s assertion that there was a shift toward harder questions is correct. It will also help people see if the distribution of question-difficulty satisfies Alex’s expectations (which I agree with).
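
    To make this concrete, here is a rough sketch of the kind of profile I have in mind (in python, with randomly generated stand-in scores, since the real per-question data is precisely what is not public):

        import numpy as np

        # stand-in data: rows are pupils, columns are exam questions,
        # entries are marks scored as a fraction of the marks available
        rng = np.random.default_rng(0)
        scores = rng.beta(a=5, b=2, size=(1000, 10))

        # the mean fraction scored per question is a crude difficulty gauge
        # (the lower the mean, the harder the question); sd and median give
        # a fuller statistical profile per question
        for q in range(scores.shape[1]):
            col = scores[:, q]
            print(f"Q{q + 1}: mean={col.mean():.2f}  "
                  f"sd={col.std():.2f}  median={np.median(col):.2f}")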

    The constant back-and-forth between parents and the ministry with no publicly available data or evidence is just not helpful.

  6. 9 Teacher Sim 28 October 2009 at 08:57

    I am a tuition teacher who has analyzed the difficult questions in this year’s PSLE.

    MOE is big on thinking skills and has taught students heuristic skills such as using models, guess-and-check and deductive reasoning to solve maths questions.

    My biggest beef with this year’s paper is that the ratio question is best solved, in the shortest time, using algebra. A check of the various online solutions forums provides proof of this as well.

    However, algebra is not explicitly taught as a heuristic skill in primary schools! Although the ratio question can be solved with models, that method is tedious and complex in this case.

    If you test what is not taught, is it fair?

    Would it not be better to test this sort of IQ-maths thinking skill at the secondary level rather than the primary level, when students are more mature, have been taught algebra and can use different heuristic methods to solve it?

    As it is, some of the questions test English comprehension skills rather than maths thinking skills.

    E.g.:

    A string of 2 big balloons is 90 cm long. A string of 5 small balloons is 1.2 m long. If both strings were of the same length, there would be 105 more small balloons than big balloons. How many balloons are there altogether?
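
    For the record, one algebraic reading of this question (and you have to guess which interpretation is intended, which is exactly the problem) is: each big balloon takes 90 / 2 = 45 cm of string and each small balloon 120 / 5 = 24 cm, so for a common string length L, L/24 - L/45 = 105, which gives L = 5400 cm, hence 120 big balloons and 225 small ones, 345 altogether. A quick check in python:

        from fractions import Fraction

        # L/24 - L/45 = 105  =>  L = 105 / (1/24 - 1/45)
        L = Fraction(105) / (Fraction(1, 24) - Fraction(1, 45))
        big, small = L / 45, L / 24
        print(L, big, small, big + small)   # 5400 120 225 345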

  7. 10 me 28 October 2009 at 10:09

    If questions are difficult but within the syllabus, isn’t it the teachers who are at fault rather than the exam?

    • 11 Mama Lee 29 October 2009 at 15:44

      I have seen quite a number of Math questions that are IQ-type questions not within the syllabus. As for Science, have you seen the P5 and P6 textbooks? No student will pass PSLE Science by relying solely on the school Science textbooks! The poor teachers face tremendous pressure trying to second-guess the SEAB every year.

  8. 12 Selwyn 28 October 2009 at 11:14

    Exam questions need to be spread sufficiently to be able to differentiate student ability. However, I think there is no point setting exam questions across the entire spectrum to tease out the “truly bright” or determine how badly a student needs help. Exam questions only need to cover a few standard deviations. There is no need for questions 1, 2, 9 and 10 in your chart, and it is questionable whether 3 or 8 need to be there. This follows from the property of the Gaussian distribution that 95% of students will lie within 2 standard deviations of the mean. The purpose of a school exam is not to grade IQ, but to test that students are of a certain standard. (If the purpose were the former, then there would indeed be a need to cover as huge a range of standard deviations as possible.) Once students fall below a certain standard they uniformly need help, and it does not matter if students exceed a certain standard, because they have for all purposes attained a high enough standard.
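
    That 95% figure is easy to verify directly, assuming marks are roughly normally distributed; in python:

        from statistics import NormalDist

        # fraction of a normal population within 2 standard deviations of the mean
        print(NormalDist().cdf(2) - NormalDist().cdf(-2))   # ~0.9545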

    The problem I see happening now is not that the exam questions are too closely clustered about the mean difficulty, but that they are either clustered or spread (I don’t know which) about a higher-than-mean difficulty.

    • 13 yawningbread 28 October 2009 at 22:39

      Selwyn – we have a Gifted Education stream. Without going into whether that’s a desirable thing or not, I am told (but I have no idea how reliable this information is) that the Gifted program is only for the top one or two percent. My guess is that the PSLE may have been tweaked in recent years to be capable of teasing out the extremes.

  9. 14 TS 28 October 2009 at 13:18

    But while the exams get harder, the Singaporeans who are unable to cope fall further and further behind, due to the unforgiving and narrow-minded nature of the entire system.

  10. 15 Jackson Tan 28 October 2009 at 16:51

    This reminds me of a particular physics course I took during my undergraduate days (actually, of all the courses taught by one particular German lecturer). His exams and tests were notoriously difficult, but they were very original and really tested our understanding. An average student could answer only about half the questions; it took a remarkable genius to give a complete set of correct answers.

    However, since our grades were given relative to other students, his method of examination did not “kill”. And it really did sort the truly smart from the merely exam-smart, because he tested genuine understanding.

    Nonetheless, it can be pretty depressing to walk out of the exam hall with only half the questions diffidently answered and the other half covered with some sort of nonsense written down in the hope of catching a mark or two. I suppose the root cause is our culture, in which an average student is expected to be able to answer, say, 75% of the questions, and an above-average student about 95%. Is this good? From my reflections on the above experience, the answer is no, because there is so much more to be gained from a difficult exam than from an easy one. So it is up to parents and teachers to break down this cultural expectation of ours.

    On a tangential note, if this video is not a hoax, Australian parents can be more unreasonable.

  11. 16 Jackson Tan 28 October 2009 at 17:24

    Another point: I have a Vietnamese friend who majored in mathematics (and he’s one almighty intelligent dude). He told me that the scholarship examination in Vietnam is pretty screwed up. The situation corresponds pretty much to the second chart in the article, but there are only a limited number of places (obviously).

    Many people could give perfect answers, so what the examiners did was go after pedantry: precisely how an answer should be written, the exact way it should be presented, and so on. Basically, things that have nothing to do with the students’ mathematical capabilities.

    Do we want a situation like that in Singapore?

  12. 17 Tan Ah Kow 28 October 2009 at 18:30

    In a sense, as you pointed out, tough tests serve a purpose, but then all testing regimes serve a purpose; for what purpose is basically a matter for the constructor of the regime. Conceptually, though, the purpose of testing falls into two basic categories: to establish competencies or to discriminate.

    If you devise a regime whose sole purpose is to discriminate, then it is inevitable that you will have to deal with the “expectation” issue, also known as the “Hawthorne” effect. Whilst not uniquely a Singaporean phenomenon (see the UK controversy about A-Level standards), the issue is heightened here because of kiasuism.

    If you devise a regime whose sole purpose is to determine competencies (in other words, you are tested on whether you meet set criteria, as opposed to being ranked), then the pain of “expectation” is somewhat reduced. An example of this kind of test is a driving test: all drivers who pass are awarded the same licence. Much of the Scandinavian education system is about getting everyone to meet minimal standards of achievement. I suppose that is the product of an egalitarian society, where the purpose of education is to raise the competencies of the nation rather than to discriminate.

    On your point that a test whose questions cover only the middle part of the bell curve is useless: you can’t really say such a test is useless if its intended purpose is to establish competencies.

    I think the larger question, which one should be asking the ministry of education, is what such generalised testing is for: to discriminate, or to establish competencies?

    If the testing regime is meant to discriminate (on a nationwide scale), then the ministry should come clean and spell that out clearly, as you have in your point about relative scores. Of course, the even wider question is: once you have the ability to discriminate, to tease out the natural “bell curve” tendencies of people, to what purpose do you want to put the results?

    If the testing regime is meant to discriminate for the sole purpose of enabling schools to select, then why not let each school establish its own regime that reflects the school’s mission?

    • 18 yawningbread 28 October 2009 at 22:35

      Tan Ah Kow – that’s a good point. A lot depends on what tests are for. Increasingly, our education system is one with divergent streams post-Primary Six. Whether that’s good or bad is a separate discussion which I won’t get into here. That being the case, the PSLE will need to be able to band the kids.

  13. 19 Larry 29 October 2009 at 20:50

    Am I reading the graphs right? They seem to imply that few students are able to answer the easy questions.

  14. 20 Kim 30 October 2009 at 04:55

    Larry, I think questions 1 to 10 are not arranged in any order of difficulty.

    Alex, thanks for the article. I have learnt from it something that, I am ashamed to admit, eluded me throughout my 18 years of education. I always wondered why normal homework and tests (we called them CAs, “continual assessments”, back then) were always so easy, while in major exams the difficulty went up a few notches. I always thought it was either just the schools’ way of springing nasty surprises on students, or simply life, i.e. something that did not have an explanation.

    Now I FINALLY know it is Singapore’s way of trying to separate the good from the bad, and the super-good from the average-good. I cannot believe that I went through a system spanning almost two decades of my life without knowing its purpose. So much for an effective education system.

    But that was back then (when rote-learning could get you quite far) and I am sure it is different now. Although I sailed through the system with flying colours, now I can finally “use those colours”. Thanks again Alex. You never fail to impress me with your articles.

  15. 21 RSE 31 October 2009 at 09:44

    Just a friendly nitpick: the graph makes no sense as drawn, as can be seen from the confusion of the commenters above. Your axes do not work for a bell curve. The horizontal axis should be the number of questions answered correctly! Q1, Q2, Q3 are unordered categories and cannot serve as the horizontal axis of a normal distribution. Strictly speaking, bars rather than curves should be used, because the number of questions answered correctly is a discrete quantity. You can use curves if marks obtained is the horizontal axis, because that is nearly continuous, and it’s the convention.🙂
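
    In other words, something like this rough sketch (with made-up scores):

        import numpy as np
        import matplotlib.pyplot as plt

        # made-up data: each pupil's count of correctly answered questions (0-10)
        rng = np.random.default_rng(1)
        correct = rng.binomial(n=10, p=0.6, size=1000)

        # a discrete quantity calls for bars, not a smooth curve
        plt.bar(range(11), np.bincount(correct, minlength=11))
        plt.xlabel("Number of questions answered correctly")
        plt.ylabel("Number of pupils")
        plt.show()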

    Discriminating the very good from the merely good? I find this notion absolutely appalling at this stage of a child’s development. Tests (if any) at this stage should only discriminate between those who know the basics and those who don’t. Testing for finer discrimination tells me MOE is ignorant of decades of peer-reviewed literature on childhood psychology and educational development.

    However, let’s put this aside for the moment and accept that this is a valid purpose of a test. As an engineer who publishes papers that use quite a bit of mathematics, I get asked to tutor kids in math. I used to do so cheerfully; after all, I believe it is good for kids to learn to like and understand mathematics. As a side effect, I have seen these tests and have been exposed to the classroom teaching techniques used in mathematics here. My experience has dispelled the notion that math is being tested. What is tested is rote memorization of arbitrary techniques resembling math, applicable to a very limited domain (exam questions). All questions have a standard wording and a fixed formula. In fact, some questions are worded so poorly (and ambiguously) that they can only be solved “correctly” if you have previously been taught the “interpretation”. The large number of question types ensures that fundamentals are completely neglected, and they have to be neglected if a student is to score in a test. The test has to be taught for a student to pass.

    Taking the above into consideration, I find the supposedly “hard” questions are of two types. The first are questions with more steps to memorize, more complicated formulas, or larger numbers: in other words, extremely tedious to solve. The other type are “surprise” questions, the kind not normally encountered in textbooks but which, for some reason, turn up in creative-problem-solving tuition centers… Mind you, these centers still teach by rote. Thus you end up discriminating between children whose daddies are rich enough to send them to such centers and those whose daddies are not.

    Even if you ignore that, there is a limit to how finely you can discriminate in a test. It takes skill and resources both to set and to mark questions that discriminate finely. Face it: most of our math teachers simply do not have that capability, which is why fixed patterns of questions with a fixed answer scheme are used in the first place! As a result, what is tested is the reconstruction of fixed formulas, not math. That leads to “teaching the test”, which results in the frankly poor math fundamentals of our students. Luckily for Singapore, strong fundamentals in mathematics are largely irrelevant to real life, unless you count performing solid research in math, the sciences, the social sciences and engineering as real life.

    Remember, you get what you test for, and your tests can only test so much. We do not test for fundamentals. And not knowing fundamental statistics results in poorly drawn and labeled graphs😉.

  16. 22 daniel 1 November 2009 at 13:05

    But the question still unaddressed, and which still bugs me a lot, is the problem of rising standards. How are the rubrics for levels of difficulty determined? How do we keep the bar from being raised constantly? It is actually not hard to create more difficult questions: all one needs to do is give secondary-level questions to primary-level kids, as is happening now. What are the caps? Where do we draw the line?

    My own take on the matter is that no matter how hard the questions get, there will still be teachers, parents, private tutors and students able to pre-empt them, putting in extra hours and effort to find ways to beat the exam requirements.

    The result is not only an education process fixated solely on examination performance, but also teachers and students left with no time to do anything fun or creative during normal curricular hours.

    So what if we can sieve out the crème de la crème of a cohort with an exam, if education gets turned into nothing more than a mechanical exercise?

  17. 23 Kenneth 10 November 2009 at 19:23

    Very fine resolution is needed for the PSLE for the simple reason that it is used to allocate students to schools.

    With roughly 50,000 students per cohort and maybe 300-400 places in each school, teasing out the top 1% is important: 1% of the cohort is only about 500 pupils, barely enough to fill the intake of one or two of the most sought-after schools.

    Besides, Alex is correct in saying that the PSLE tolerates variance in difficulty because results are entirely relative to the competition.

    And I think discouraging students by means of an examination that will not affect what opportunities are available to them later in life is an excellent way to build character. Not that I imagine MOE sees things this way.

  18. 24 dudette 20 November 2009 at 22:25

    Alex, the first problem with tough tests is that students are selected for schools based SOLELY on 1-2 point differences in test results. Of course parents care about the results. Have you tried finding a job in Singapore recently without a good degree?

    Frankly, I have no problem with students being urged by their parents to take on charity work, meaningful projects and character-building classes. I only wish they counted.

