One of the major complaints of businesses in this country is a growing lack of qualified college graduates. One of the major complaints of students is a lack of opportunity (defined as too expensive, lack of access, insufficient preparation, etc.). Other groups have similar complaints, but what it boils down to is a breakdown in the higher education delivery system. The sad thing is there is no reason for this breakdown to be occurring.
This moment in time gives us access to the greatest knowledge transfer mechanisms ever developed: the computer and the internet. We just are not utilizing them properly. Well, in my opinion that's the case anyway.
To simplify this post, I am just going to talk about the goal of delivering a qualified graduate to a consumer, no matter who they are.
First, we have to accept that my proposed model won't work in every case. It will be up to the consumer to define their needs, and sometimes that may mandate a more traditional college program.
Second, the ultimate customer for the project will have to be heavily involved in the process. For example, Boeing needs aeronautical engineers. To really get the type of engineer they want, they would need to lay out the set of skills they feel are important. From there it would be necessary to backtrack to the courses that develop those skills, and a curriculum would need to be developed. From there textbooks would need to be written and labs developed, on and on up the chain until a comprehensive program had been developed.
Once that process has been completed, it is necessary to deliver the required knowledge to the student. Most of the pieces are already in place. Lectures can be developed and delivered via podcast or YouTube (the OpenCourseWare project and iTunes U are already doing some of this). Reading assignments can be emailed out. Textbooks and other course materials can be placed on Wikibooks. The two major sticking points, as I see it, are labs and a feedback mechanism.
Feedback is the easiest - IM, email, phones, blog comments all offer a feedback loop. Testing is another method, though here we have to be careful. We want the test to be fair and applicable as well as relatively secure. In other words, we don't want a bunch of multiple choice questions floating around on the internet that a student can memorize to get a passing grade; we want the test to really measure knowledge. Part of this problem can be solved by the use of adaptive testing.
Adaptive testing is a method of testing that adapts to an examinee's knowledge level of a subject.
CAT successively selects questions so as to maximize the precision of the exam based on what is known about the examinee from previous questions. From the examinee's perspective, the difficulty of the exam seems to tailor itself to their level of ability. For example, if an examinee performs well on an item of intermediate difficulty, he will then be presented with a more difficult question. Or, if he performed poorly, he would be presented with a simpler question. Compared to static multiple choice tests that nearly everyone has experienced, with a fixed set of items administered to all examinees, computer-adaptive tests require fewer test items to arrive at equally accurate scores. (Of course, there is nothing about the CAT methodology that requires the items to be multiple-choice; but just as most exams are multiple-choice, most CAT exams also use this format.)
The biggest problem with adaptive testing in this program would be the development of the question pool. One of its biggest advantages is its flexibility, which allows a number of different question types, including scenario and simulation questions. In addition, the report from the exam can be used to pinpoint where a student has mastery of a subject or requires work. Another nice thing about this style of testing is the flexibility it builds into the program: a student who already has a high level of mastery in a subject can prove it by simply taking the test. As this program goes on, this method of testing may be defined at an even greater level of granularity so that certain critical skills are tested for mastery at various points.
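The select-harder-or-easier loop described above can be sketched in a few lines. This is a minimal illustration, not a real scoring model: it assumes items are plain (id, difficulty) pairs on an arbitrary 1-10 scale, and it nudges the ability estimate up or down by a shrinking step, where a production CAT would use item response theory to pick items and score the examinee.

```python
# Minimal sketch of a computer-adaptive test loop (illustrative only).
# Assumptions: items are (item_id, difficulty) pairs on an arbitrary
# scale, and ability is adjusted by a halving step instead of a full
# item-response-theory estimate.

def run_adaptive_test(items, answer, num_questions=5, start_ability=5.0):
    """items: list of (item_id, difficulty); answer(item_id) -> bool."""
    ability = start_ability
    step = 2.0
    remaining = list(items)
    results = []
    for _ in range(min(num_questions, len(remaining))):
        # Pick the unused item whose difficulty is closest to the current
        # ability estimate -- the item that tells us the most.
        item = min(remaining, key=lambda it: abs(it[1] - ability))
        remaining.remove(item)
        correct = answer(item[0])
        results.append((item[0], correct))
        # Correct answer: try harder items. Miss: try easier ones.
        # Halving the step makes the estimate converge.
        ability += step if correct else -step
        step /= 2
    return ability, results

# Toy run: an examinee who answers correctly whenever difficulty <= 6.
pool = [(i, float(i)) for i in range(1, 11)]
ability, log = run_adaptive_test(pool, lambda item_id: item_id <= 6)
```

In this toy run the estimate homes in on the examinee's true threshold of 6 after only five questions, which is the point of the "fewer test items for equally accurate scores" claim above.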
Of course adaptive testing will not work for all subjects and scenarios, so a network of instructors will need to be developed. These can be industry professionals, local educators, retirees - essentially anyone with the required knowledge who can evaluate the work of the student and provide feedback. A quality control mechanism would be needed to ensure that the instructors are adequate, but I think that could be handled by evaluations from students, skill set feedback from employers, and evaluations from other instructors or administrators involved in this program.
On the subject of practical exams, labs, and hands-on instruction, we would use the same local instructors or partnerships with local businesses. For classes such as chemistry it may be necessary to set up a partnership with a local high school or community college; in these cases the students will have to bear some additional cost. In other cases it may be necessary to set up a regional center where a student can come do a block of labs over a weekend, which again may add some cost. In all these cases participating industry partners should also help with some of the expense. After all, the idea is to deliver a more prepared graduate to them.
After completing this program, the student is awarded a certificate that includes the sponsors of his degree track and a breakdown of the didactic and practical skills he has demonstrated mastery of, and he is off into the world - hopefully with the goal of relevant, affordable, and convenient education having been met.
One other option that I think would be interesting is for major state universities to partner with this program. That would ease many of the start-up pains and would also give them the opportunity to cherry-pick the best students for an "elite" education.
Anyway just another idea that will never be adopted.
Education, MIT, iTunes U, Open Courseware, Adaptive Testing