Monday, March 14, 2016

"'Women in Computing' As Problematic": A Summary

I've long been interested in why, despite so much organized effort, the percentage of women in CS has been so stagnant. One hypothesis I've held for some time is that the efforts themselves are unintentionally counterproductive: that they reinforce the gender subtyping of "female computer scientist" as separate from the unmarked "computer scientist".

I was excited earlier this week when Siobhan Stevenson alerted me to an unpublished thesis from OISE: "Women in Computing as Problematic" by Susan Michele Sturman (2009).

In 2005-6, Sturman conducted an institutional ethnography of the graduate CS programmes at two research-intensive universities in Ontario. In institutional ethnography, one starts by "reading up": identifying those who have the least power and interviewing them about their everyday experiences. From what those interviews reveal, the researcher then goes on to interview the people identified as having power over the initial participants.

Interested in studying graduate-level computer science education, she started with female graduate students. This led her to women-in-computing lunches and events, and to interviewing faculty members and administrators at those two universities. She also attended the Grace Hopper Celebration of Women in Computing (GHC) and analysed the texts and experiences she encountered there. Her goal was to understand the "women in computing" culture.

In the style of science studies scholars like Bruno Latour, Sturman comes to the organized women-in-computing culture as an outsider. As a social scientist, she sees things differently: "Women in the field wonder what it is about women and women's lives that keeps them from doing science, and feminists ask what it is about science that leads to social exclusion for women and other marginalized groups."

Labels:
feminist analysis,
history of cs,
institutional ethnography,
literature summaries,
social theory,
sociology,
women in cs
Friday, February 19, 2016
Getting Fedora 23 working on an Asus Zenbook UX305CA (Intel Skylake)
I recently acquired a shiny new Asus Zenbook UX305CA to replace my old UX32A which had been dying a slow death for the past year.
Excitedly, I put the latest Fedora release (23) on the computer, using the Cinnamon spin. Fedora ran, but the screen resolution was stuck at 800x600 with no other options available.

The issue? The Intel Skylake chip in the computer wasn't supported by the kernel that Fedora 23 ships with (kernel 4.2). Like many Linux users with new laptops, I found myself in a bit of an adventure with the new Skylake chip. I thought I'd write up how I eventually got Fedora 23 working on this computer, for the sake of those following the same path.

To get Linux working under kernel 4.2, I found the Arch Wiki invaluable:
- I needed the kernel boot argument: i915.preliminary_hw_support=1
- And then you set xorg.conf as described in the Arch Wiki
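For reference, the two workarounds looked roughly like this. The paths and option names are from memory of the Arch Wiki's advice at the time, so treat this as a sketch rather than exact instructions:

```
# /etc/default/grub -- add the flag to the kernel command line,
# then regenerate grub.cfg (grub2-mkconfig on Fedora):
GRUB_CMDLINE_LINUX="... i915.preliminary_hw_support=1"

# /etc/X11/xorg.conf.d/20-intel.conf -- force the intel driver:
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
EndSection
```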
Once both of those were done my computer was working, but without hardware acceleration. The next step was to install kernel 4.4, which supports Skylake.
- You'll want to add the repository where Fedora keeps the latest kernel versions: I found 4.4 in kernel-vanilla-stable (see instructions here)
- Then, once I tried booting with kernel-4.4, I got an error at boot: "double free at 0x(address) Aborted. Press any key to exit". To get rid of the error, I found I had to temporarily disable the validation steps of the new kernel as described in comment 18 on the bugzilla report.
- The mokutil utility will ask you to set a password for altering safe boot. Write it down. When you reboot it will ask for the password on a character by character basis, where the order of the characters is random. I wound up failing this the first time because I assumed the password should be 0-indexed; it's actually 1-indexed.
- Once I had insecure boot turned on, I could successfully boot kernel-4.4! But cinnamon informed me that software rendering was still on. To solve this, I had to undo what I'd done to make kernel-4.2 work: take out the i915.preliminary_hw_support=1 and set xorg.conf to what is recommended for Intel graphics in general rather than the Skylake bandaid (you just take out the options line).
Once all that was done, the computer's working quite nicely!
Thursday, January 28, 2016
On Paulo Freire, and seeing computing as literacy
Paulo Freire was a Brazilian educator, best known for his book Pedagogy of the Oppressed. Indeed, it's the most commonly assigned non-textbook reading in education classes. His ideas have been used for teaching many topics, such as health and African American studies. And yet, most people in CS education circles aren't familiar with Freire. In this post I'll provide a short introduction to Freire and why his work is relevant to computing education.
To Freire, education is an inherently political act. Education can be a tool of empowerment, and it can also be a tool of oppression. Freire referred to traditional education as the "banking model": the teacher deposits coins of knowledge into the bank accounts of the students. "Instead of communicating, the teacher issues communiqués and makes deposits which the students patiently receive, memorize, and repeat. This is the "banking" concept of education, in which the scope of action allowed to students extends only as far as receiving, filing, and storing the deposits." (Freire, 1968)

This model ignores what the student may already know. It fails to give students a sense of ownership over their knowledge, and fails to stimulate critical thinking. Freire argued this reinforces oppression. For education to be empowering, students need to be active agents in their own learning.
Wednesday, January 27, 2016
Impostor syndrome viewed through the lens of social theory
Sociologists like to use performance as a metaphor for everyday life. Erving Goffman in particular championed the metaphor, bringing to light how our social interactions take place on various stages according to various scripts. And when people don't follow the right script on the right stage, social punishment ensues (e.g. stigma).
Pierre Bourdieu rather similarly described social interactions as taking place in arenas, seeing them more like games than plays. (Sometimes champs is translated as 'field' rather than 'arena'; it's worth noting Bourdieu intended it to have a connotation of sport/war.) Rather than a script, people get a sense of the rules of the game. And when people don't follow the rules of the game, social punishment ensues.
Whether one is failing at a social game or performance, social punishment can take many forms. For example, sexual harassment is most reported by those who go against gender roles. Powerful women are more likely to be harassed than less powerful women. Women in male-dominated fields are more likely to be harassed. Men who are effeminate, gay, or champions of feminism, are more likely to be harassed. Harassers act to keep people "in their place".
Since not following the script/game is costly for individuals, we're trained from a young age to be on the lookout for cues about what stage/arena we're on and what role we should be playing. Looking for and responding to cues is something we do automatically most of the time. Kahneman would see it as an example of System 1 thinking.
Impostor syndrome is the sense that you're the wrong person to be playing the role you're in. You're acting a role that you've been trained in and hired for -- but your brain is picking up on cues that signal that you're not right for the role.
Labels:
bias,
bourdieu,
goffman,
impostor syndrome,
psychology,
social theory
Thursday, January 21, 2016
CS grades: probably more normal than you think they are
It's commonly said that computer science grades are bimodal. And people in the CS education community have spent a lot of time speculating and exploring why that could be. A few years back, I sat through a special session at ICER on that very topic, and it occurred to me: has anybody actually tested if the grades are bimodal?
From what I've seen, people (myself included) will take a quick visual look at their grade distributions, and then if they see two peaks, they say it's bimodal. I've done it.
Here's the thing: eyeballing a distribution is unreliable. If you gave me some graphs of real-world data, I wouldn't be able to tell on a quick glance whether they're, say, Gaussian or Poissonian. And if I expected it to be one of the two, confirmation bias and System 1 Thinking would probably result in me concluding that it looks like my expectation.
Two peaks on real world data don't necessarily mean you have a bimodal distribution, particularly when the two peaks are close together. A bimodal distribution means you have two different normal distributions added together (because you're sampling two different populations at the same time).
It's quite common for normal distributions to have two "peaks", due to noise in the data. Or the way the data was binned. Indeed, the Wikipedia article on Normal distribution has this histogram of real world data that is considered normal -- but has two peaks:
And since this graph looks in all honesty like a lot of the grades distributions I've seen, I decided I'd statistically test whether CS grades distributions are normal vs. bimodal. I got my hands on the final grades distributions of all the undergraduate CS classes at the University of British Columbia (UBC), from 1996 to 2013. That came out to 778 different lecture sections, containing a total of 30,214 final grades (average class size: 75).
How do you test for normality vs bimodality?
There are a bunch of ways to test whether some data are consistent with a particular statistical distribution.

One way is to fit your data to whatever formula describes that distribution. You can then eyeball whether the resulting curve matches the data, look at the residuals, or do a goodness-of-fit test. (It's worth noting that you could fit a normal distribution as bimodal -- the two sub-distributions would just be extremely close together. If a normal distribution fits, that's the simpler explanation of the data -- Occam's razor and all.)
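As an illustration of the curve-fitting route, here's a sketch in Python with synthetic grades (the fake data and bin choices are mine, and this is not the code used for this post):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fake "final grades" for illustration -- the real analysis used UBC's data.
rng = np.random.default_rng(42)
grades = rng.normal(70, 10, 2000)

# Histogram the grades, then fit a single Gaussian to the bin heights.
density, edges = np.histogram(grades, bins=30, density=True)
centers = (edges[:-1] + edges[1:]) / 2

def gaussian(x, mu, sigma):
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

(mu, sigma), _ = curve_fit(gaussian, centers, density, p0=(60, 15))

# If the one-Gaussian residuals are small, a bimodal model isn't buying you anything.
residuals = density - gaussian(centers, mu, sigma)
```

If the single Gaussian leaves large, systematic residuals, that's when a two-component fit starts earning its extra parameters.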
Another way is to use a pre-established statistical test which will allow you to reject/accept a null hypothesis about the nature of your data. I went this route, for the ease of checking hundreds of different distributions and comparing them.
There are a large variety of tests for whether a distribution is normal. I chose Shapiro-Wilk, since it has the highest statistical power.
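In Python, for example, the test is a single scipy call (a sketch on synthetic data, not the actual UBC grades):

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
normal_class = rng.normal(70, 10, 1000)
# A deliberately bimodal "class": two well-separated populations mixed together.
bimodal_class = np.concatenate([rng.normal(55, 5, 500), rng.normal(85, 5, 500)])

# Null hypothesis: the sample was drawn from a normal distribution,
# so a small p-value is evidence *against* normality.
_, p_normal = shapiro(normal_class)
_, p_bimodal = shapiro(bimodal_class)
```

On the mixed sample, Shapiro-Wilk rejects normality decisively; on the genuinely normal sample, it (usually) doesn't.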
There aren't as many tests for whether a distribution is bimodal. Most of them work more or less by trying to capture the difference in means in the two distributions that are in the bimodal model, and testing whether the means are sufficiently separate. I used Hartigan's Dip Test, because it was the only one that I could get working in R #OverlyHonestMethods.
I also computed the kurtosis for every distribution, because I had read that a necessary but not sufficient condition for bimodality is that kurtosis < 3. When you do thousands of statistical tests, you're gonna have a lot of false positives. To minimize false positives, I only used Hartigan's Dip Test on distributions where the kurtosis was less than 3. I set my alpha value at 0.05, so I expect a false positive rate of 5%.
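One wrinkle worth flagging: the "kurtosis < 3" rule is stated in Pearson's scale, where a normal distribution has kurtosis 3. Many libraries default to excess kurtosis (normal = 0), which would silently shift the threshold. A sketch in Python (again synthetic data, not the code used here):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
bimodal_class = np.concatenate([rng.normal(55, 5, 500), rng.normal(85, 5, 500)])
normal_class = rng.normal(70, 10, 1000)

# fisher=False gives Pearson kurtosis (normal = 3), matching the screen above;
# scipy's default (fisher=True) returns excess kurtosis (normal = 0).
k_bimodal = kurtosis(bimodal_class, fisher=False)
k_normal = kurtosis(normal_class, fisher=False)
print(k_bimodal < 3)   # the bimodal class passes the screen for further testing
```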
Test results
Starting with kurtosis: 323 of the 778 lecture sections had a kurtosis less than 3. This means that 455 (58%) of the classes were definitely not bimodal, and that at most 323 (42%) could be bimodal.

Next I applied Hartigan's Dip Test to the 323 classes with kurtosis less than 3. For this test, the null hypothesis is that the population is unimodal, so if p is less than alpha we have a multimodal distribution. This was the case for 45 classes (13.9% of those tested, 5.8% of all the classes).
For the Shapiro-Wilk test, the null-hypothesis is that the population is normally-distributed. So, if the p value is less than the alpha value, we can say the population is not normally distributed. This was the case for 106 classes.
44 of the 45 classes which were previously determined to be multimodal were amongst the 106 classes which the Shapiro-Wilk test indicated weren't normally-distributed. In short, 13.6% of the classes weren't normal, many of which are known to be multimodal.
For the 86.4% of classes where we failed to reject the null hypothesis, we can expect -- though, because of type II error, not guarantee -- that they are normal. I've got a large sample size and good statistical power; from bootstrapping a likely beta value, I estimate my false negative rate at around 1.48%.
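The general idea behind a false-negative estimate can be simulated directly: generate many samples from a plausible bimodal alternative at a realistic class size, and count how often the test fails to reject. A sketch (my own choice of alternative, not the bootstrap actually run for this post -- the estimate depends heavily on which alternative you pick):

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(7)
ALPHA, CLASS_SIZE, TRIALS = 0.05, 75, 500

def fake_bimodal_class(n):
    # A strongly bimodal alternative: two populations 30 points apart.
    low = rng.normal(55, 5, n // 2)
    high = rng.normal(85, 5, n - n // 2)
    return np.concatenate([low, high])

# A false negative = Shapiro-Wilk failing to reject normality on bimodal data.
misses = sum(shapiro(fake_bimodal_class(CLASS_SIZE))[1] >= ALPHA
             for _ in range(TRIALS))
false_negative_rate = misses / TRIALS
```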
Bottom line: An estimated 85.1% of the final grades in UBC's undergrad CS classes are normally-distributed. 5.8% of the classes tested as being bimodal, which isn't a whole lot more than the false positive rate I'd expect to see (5%).
Discussion
I've only analyzed distributions from one institution, so you might be thinking "maybe UBC is special". And maybe UBC is special.

I couldn't get my hands on a similar quantity of data from my home institution (U of Toronto). But every U of T class I could test was normally-distributed (n=5) -- including classes I'd taught, where I'd eyeballed the grades and then told my colleagues/TAs/students that my grades were bimodal. Oops.
Since I thought CS classes were bimodal, when I looked at my noisy grades distributions, I saw bimodality. Good old System 1 Thinking. Had I taken the time to fit my data, or statistically test it, I would have instead concluded it was normally-distributed.
I'm currently reading Stephen Jay Gould's The Mismeasure of Man, and this part stuck out for me: "Statisticians are trained to be suspicious of distributions with multiple modes." Where you see multiple modes, you're likely either looking at a lot of noise -- or two populations are improperly being sampled together.
Why are CS grades distributions so noisy? My colleague Nick Falkner recently did a series of blog posts on assessments in CS classes, and how truly ugly they are. And my colleagues Daniel Zingaro, Andrew Petersen and Michelle Craig have written a couple of lovely articles which together suggest that if you ask students a bunch of small, incremental concept questions rather than one giant all-encompassing code-writing question, you get grades distributions which look more normal. How we assess our students affects what sort of distribution we get.
Perhaps once we as CS educators figure out better ways to assess our students, our grades distributions won't be quite so noisy -- and prone to miscategorization?
Labels:
assessment,
bias,
grades distributions,
statistics
Thursday, December 17, 2015
A brief introduction to social theory
Theories from psychology enjoy a fair bit of use in computer science education, but education is not merely a cognitive process: it's also a social one.
I've found it useful to learn about social theory as a CS education graduate student, and I thought I'd share a quick introduction to social theory that I initially wrote for my research proposal to my thesis committee this fall.
Classical Social Theory
Classically, sociology has had four major schools of thought, each of which goes by various names and is associated with one of the four "founders" of sociology:

- Auguste Comte (1798-1857) coined the term "positive philosophy", now better known as positivism. Comte's sociology was inspired by the French Revolution: sociology was envisioned as a means to produce the perfect society. (Indeed, Comte was incensed that the lower classes wouldn't simply accept their "place" in society.) In Comte's world, one would test out different ideas for how to run a society, and find the optimal approach. While Comte himself argued for holism, (post)positivism has since come to be associated with reductionism.
- The sociology of Max Weber (1864-1920) contrasts with Comte's: Weber was a proponent of anti-positivism (also known as constructivism or interpretivism). Weber saw verstehen (understanding) as the goal of research, rather than hypothesis verification. Weber theorized upon social stratification; he also wrote about closure (how groups draw boundaries, construct identities, and compete with out-group members for scarce resources). Weber's sociology put an emphasis on ideology: capitalism, for example, was the result of ideological conditions unique to Northern Europe, and has only succeeded where those ideological conditions hold.
- Emile Durkheim (1858-1917) built on Comte's positivism, setting forth structural functionalism. In structural functionalism, a society is viewed like a biological cell: different parts of a society are likened to organelles. Durkheim's sociology looks at how the parts work together to comprise the whole. It is also holistic -- much of how systems thinking was used in the social sciences built on Durkheimian notions of society.
- Finally, Karl Marx (1818-1883) provided an approach which contrasts with Durkheim's: instead of seeing harmony, it emphasizes the role of class conflict in society and its historical-economic basis. Marxist sociology has also been known as conflict theory, though the historical-economic basis is not the only way one could study conflict. For example, Weberians see conflict rooted in ideology, rather than in a clash over resources.
20th Century Social Theory
While classical theory is often referred to in terms of thinkers (Marx, Weber, etc.), the more modern movements tend to be known by schools of thought. Some of the major ones:

- Neo-Marxism refers to the 20th-century updates of Marxist theory, which have pulled in Weberian and poststructuralist work on status and power. Antonio Gramsci is a well-known neo-Marxist, who was curious about why the revolution Marx had predicted never seemed to come about. Gramsci is most famous for his theory of cultural hegemony. Critical theory is also based on Marxist thought, and is often conflated with it. Critical theory emphasizes praxis, the combination of theory and practice. It is most associated with the Frankfurt School, including names such as Theodor Adorno and Jürgen Habermas.
- Interactionism assumes that all social processes are the result of human interaction. It emerged in the early 20th century. Interactionists focus their studies on the interactions between individuals; as a result, they do not `see' the effects of physical environment -- or even solitary thought/work. They also reject quantitative data in favour of qualitative approaches: grounded theory and ethnomethodology were both developed by interactionists. The notion of social interaction as a performance was first developed in interactionist thought; poststructuralists have since refined it. While there's no single name associated with this perspective, some associated names include George Herbert Mead, Erving Goffman, and Dorothy Smith.

Sociologists also look at social systems at different levels: the macro level looks at entire societies, nations, etc.; the meso level looks at organizations, institutions, etc.; the micro level looks at individuals. While classical sociology was generally macro, interactionism focuses on the micro level.

- Structuralism might be seen as a macro-focused backlash against interactionism's focus on the micro. Structuralism sees social processes as stemming from larger, overarching structures, and also emerged in the early 20th century. Structuralists see society as being governed by these structures in a somewhat analogous fashion to how physicists may see the universe as being governed by laws of nature. A criticism of structuralism is that it sees these structures as fixed; in contrast, a Marxist would focus on historical change. Some structuralists include Claude Lévi-Strauss, Ferdinand de Saussure, and Jean Piaget.
- Poststructuralism (more or less interchangeable with "postmodernism") is not a particularly coherent school of thought. This is not altogether surprising, as the key poststructuralists reject both the label and the very notion that there is such a real thing as poststructuralism. Poststructuralists reject the idea of "objective" knowledge: since the study of sociology is done by humans who are biased by history and culture, they argue that any study of a social phenomenon must be combined with a study of how that knowledge of the phenomenon was produced. For example, a poststructuralist would not take a concept like `gender' as a given, but would problematize the concept. Poststructuralism evolved out of structuralism in the mid 20th century. Some major poststructuralists include Michel Foucault, Jacques Derrida, and Judith Butler.
Thursday, December 18, 2014
A Survey of High School CS in Canada
I thought I'd have a look at the status of high school CS across Canada. (I'm keeping this to provinces/territories that have a population greater than 500,000.)
Overall, it's generally categorized as an elective, often lumped in with fine arts and second languages. BC, Alberta and Manitoba are the only provinces where it is categorized as a teachable subject at the BEd level; in Ontario it is available as a minor. (Quebec, as usual, is a beast of its own not easily comparable to the other provinces.)
Western Canada seems to be leading the pack for standards and teacher support -- the only CSTA chapters in Canada are in BC, Alberta, Saskatchewan and Manitoba.
Atlantic Canada is furthest behind: New Brunswick, Nova Scotia and Newfoundland and Labrador do not appear to have computer science curricula, let alone CS teacher training/support.
Nowhere easily available on the internet could I find stats on how many schools teach CS or how many students take it -- this information is FOIable if anybody really wants to see the numbers. (If you do have the numbers, I'd love to see them!)
British Columbia:
- British Columbia does have a CS curriculum. It's lumped in with "computer studies" that is mostly IT. It is categorized as "Applied Skills" rather than a Science or Math.
- CS is not categorized as a required course for a high school diploma, but can be counted as an elective topic. BC students need to take 28 credits of elective topics.
- The University of British Columbia offers a BEd in computer science, as does the University of Victoria and the University of Northern BC. I can't find any BSc/BEd combined programmes for CS.
- There is a CSTA chapter for BC.
- I can't find any stats on how many BC schools teach CS. My impression is that it's not uncommon in Vancouver but rare elsewhere.