For fourteen years the Department of Mathematics and Computer Science at the University of Denver hosted an annual competition for teams of students from local high schools in Colorado and nearby states to test their computer programming skills. Judging by the Internet Archive's Wayback Machine, they stopped holding it after 1998. That's when the link to information about it disappears from the department's homepage and a note is added at the bottom of the page:
The department will no longer be conducting the annual High School Programming Contest. We express our thanks to those who have participated over the years.
A page for the contest is also archived, but unfortunately it doesn't look like the Wayback Machine got to any of the files that were linked on it. There were posts announcing it on the co.general Usenet newsgroup in 1995 and 1997. Two of my classmates, Sunjit and Lisa, and I were at the thirteenth annual contest in 1997 representing Pomona Senior High School.
If I remember correctly (unfortunately, the rules don't seem to be archived anywhere), you could have a team of 1-3 people. Each team had three hours to work on about 5-7 problems. Your team had to bring its own computer, and you could write the solutions in any language you'd like. I don't remember a rule saying that you couldn't connect to the Internet or BBSes during the competition, but back then that would have been difficult to arrange. I don't recall if they let you bring any reference books, although I don't think we did. I believe you were scored on the number of problems you completed and the time in which you completed them. This is what the schedule looked like in 1997 (followed by the problems and solutions from that year).
I was reminded of this recently as I've made a project out of scanning a lot of boxes of old documents I've saved, many of them dating back to when I was in high school. To help teams practice, DU made some problems from previous years available, and if you knew people who had been in the competition before, sometimes they would xerox copies of them for you. I came across the problems we had from 1997, as well as practice problems from 1996, 1995, 1993, 1992 and 1989. The first one from 1996, T4LK TH3 T4LK, is still my favorite. :)
I'm sad DU isn't still hosting the contest; it was a lot of fun when I was in high school. Maybe if someone at the University of Colorado decided to start a contest like this, he or she could use these problems as a reference. Colorado is conspicuously missing from ACM's list of High School Programming Contests.
I set up ServiceNow's ODBC driver and then set up a Linked Server to connect to one of our ServiceNow instances from SQL Server Management Studio. When you run a SQL query from SSMS on your ServiceNow instance and check the v_user_session table, the driver seems to be using SOAP to pull the data and then doing any necessary work for joins or aggregates in the driver code.
We've been trying to troubleshoot some problems where data that's in an instance database doesn't always show up in the view mappings of particular tables (the .do URLs also used for the SOAP interface). I've always thought of ODBC drivers as connecting "directly" to a given database, but in this case the driver connects to the model (database) through the view. I have to say it's a pretty creative approach; I wouldn't have thought of implementing an ODBC driver in that way.
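To make that behavior concrete, here's a rough sketch in Python of what a join and an aggregate evaluated in the driver (rather than in the database) might look like. This is my guess at the general approach, not the actual driver code, and the rows below are made-up stand-ins for result sets fetched one table at a time over SOAP:

```python
# Rows "fetched" from the incident table (imagine one SOAP call per table)
incidents = [
    {"number": "INC0001", "assigned_to": "u1"},
    {"number": "INC0002", "assigned_to": "u2"},
    {"number": "INC0003", "assigned_to": "u1"},
]

# Rows "fetched" from the sys_user table
users = [
    {"sys_id": "u1", "name": "Alice"},
    {"sys_id": "u2", "name": "Bob"},
]

def client_side_join(left, right, left_key, right_key):
    """Inner join evaluated in the driver instead of the database."""
    index = {row[right_key]: row for row in right}
    return [
        {**l, **index[l[left_key]]}
        for l in left
        if l[left_key] in index
    ]

joined = client_side_join(incidents, users, "assigned_to", "sys_id")

# A COUNT(*) ... GROUP BY name aggregate, also evaluated client-side
counts = {}
for row in joined:
    counts[row["name"]] = counts.get(row["name"], 0) + 1

print(counts)  # {'Alice': 2, 'Bob': 1}
```

It's inefficient compared to letting a database engine do the work, but it explains why the driver works against any table the SOAP interface exposes.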
Last week, Big Think posted an excerpt of an interview with Clay Johnson, author of The Information Diet, on information obesity (a term Johnson believes is more apt than information overload). I was reminded of it today when I was flipping through a book I'm about to lend to a friend of mine, Mental Health Through Will-Training, originally published in 1950 and written by neuropsychiatrist Abraham Low (1891-1954). There's a passage he wrote in 1950 that is probably more relevant today than it ever could possibly have been 62 years ago (reproduced from page 195 below).
... our age is hopelessly addicted to the worship of sheer information. Present-day men and women receive the bulk of their education through the channels of information, especially after they have reached adolescence or adulthood and are eligible for what is called "adult education." Then they are given the doubtful benefit of lectures and forums, book reviews, popular expositions on science and psychology, advice in child rearing and family management, instruction on how to make friends and influence people. The implication is that correct information is the surest way to correct action and that all a person needs for improving habits is to be told how to do it. Training, practice and leadership have been radically, and perhaps joyfully, discarded in this weird scheme of life in which grownup persons are expected to repose childlike faith in the magical power of theoretical knowledge. ... The notion that by some trick information can change action and direct impulses has gripped the imagination of the age. ... In [Low's] old-fashioned scheme, information is merely the preliminary to training and practice, not a substitute for leadership.
I'm sure there are some people who, when presented with new information about one of their habitual behaviors, are able to change quickly if they deem it important enough. What Low is saying here, however, is that an idea is more likely to be transformative if it can be combined with training, leadership and (though not explicitly stated) fellowship. In the same way that reading too much information that affirms our world view is unhealthy, lethean consumption of information that could be transformative if it were later practiced is tragic.
The CU Boulder honor code was adopted while I was a student here. I honestly felt like I had a pretty good intuitive understanding of what cheating was, so I never really cared much about the details of how it was defined other than the pledge that's posted in all of the classrooms.
"On my honor, as a University of Colorado at Boulder student, I have neither given nor received unauthorized assistance."
I'm auditing a class that requires students to pass a quiz on the honor code (otherwise you're dropped from the course). To pass the quiz I read the violations, which all seemed pretty common sense to me until the very last violation listed on the page.
Resubmission: Completing original work for one class and then resubmitting the work to another class without permission from both instructors.
For whatever reason, resubmitting assignments doesn't intuitively seem morally or ethically wrong to me. If anything it seems more efficient (e.g. killing two birds with one stone), and efficiency generally seems morally and ethically good to me. Remember, though, that efficiency is always about what you're trying to optimize, and you can optimize some things at the expense of others. But in this case, if you've already completed the necessary work for some requirement, it would just be redundant to do it again. It would be like re-taking a class you've already completed, or doing your taxes twice. In fact, I'd go as far as saying that not resubmitting assignments, in the cases where you can, intuitively seems like an act of supererogation.
But it's good to know that, in the established CU Boulder ethics, not everyone sees it that way. Especially if the people who don't are the kind that have a lot of control over your academic career. So, hopefully this will encourage people like me, who usually don't take time to dive into the details of things like this, to have a look.
There's an RSS feed of spoken Wikipedia articles (articles that someone has read aloud, recorded, and uploaded to Wikipedia) associated with the Spoken Wikipedia Project that's updated manually (and has only been updated once since 2009). I've wanted to create an automated version for a long time, and got pretty close today using Yahoo! Pipes.
You can subscribe to the proxy'd version of the podcast here: http://feeds.feedburner.com/SpokenWikipediaPodcast
You can view the pipe here: http://pipes.yahoo.com/pipes/pipe.info?_id=d0aa629e370e6719b16454b892b811ab
It currently does not include articles with recorded versions that are split up into multiple files (e.g. History of the Earth). It's also restricted by the 30-second time limit Yahoo! Pipes imposes on processing data, so it only gets audio files from the most recently updated articles. To remain patent free, Wikipedia uses Ogg Vorbis (as opposed to MP3) for spoken article recordings, so you may run into some compatibility issues there.
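If Pipes ever falls over, the feed-building half could also be done with a small script. Here's a minimal sketch in Python; the entry shown is a placeholder, and actually pulling the current list of recordings from the Spoken Wikipedia category would still need to be bolted on:

```python
from xml.etree import ElementTree as ET

def build_podcast_feed(title, link, items):
    """Build a minimal RSS 2.0 podcast feed with Ogg Vorbis enclosures."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for article, audio_url, pub_date in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = article
        ET.SubElement(item, "pubDate").text = pub_date
        # Spoken Wikipedia recordings are Ogg Vorbis, hence audio/ogg
        ET.SubElement(item, "enclosure",
                      url=audio_url, type="audio/ogg", length="0")
    return ET.tostring(rss, encoding="unicode")

# Placeholder entry -- a real feed would list every recording from the
# Spoken Wikipedia category
feed = build_podcast_feed(
    "Spoken Wikipedia",
    "http://en.wikipedia.org/wiki/Wikipedia:Spoken_articles",
    [("Example article",
      "http://upload.wikimedia.org/example.ogg",
      "Mon, 02 Jul 2012 00:00:00 GMT")],
)
print(feed)
```

Podcast clients key off the enclosure elements, so getting the url and type attributes right is most of the work.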
BTW - Marissa Mayer, if you're reading this, please don't allow Yahoo! Pipes to continue to be neglected while you're CEO. It's one of the coolest things Yahoo! ever did, but it needs more tender love and care.
When I'm tweeting and I have a few spare characters, I try to throw in a relevant hashtag. But, sometimes it's hard to tell which ones people are watching. If you suffer from a similar kind of OCD, I can tell you I've had pretty good luck with checking for popular ones relevant to topics I'm tweeting on using hashonomy.com, tagwalk.com and tagdef.com.
I had set up page2rss to monitor the pages listing the various PinkVERIFY'd Toolsets in January, and just now noticed that Pink Elephant removed their page listing PinkVERIFY'd V2 Toolsets on March 22, 2012 (from the looks of the page2rss changes). If you want to get some idea of what was on that page, the latest copy from archive.org is from December of 2010.
I created a new pipe that monitors the new Pink Elephant pages.
I watched a lecture Sergey Brin gave to a class, SIMS 141 at UC Berkeley, in 2007. This was probably the lecture that made me love Google. I was very impressed with Sergey; he was funny, equanimous, clever and humble. And I really liked his answer about the semantic web (16:34).
I think that tagging and semantics are great, as long as the computers are doing the tagging and semantics. Because if people are doing the tagging and semantics for the computers, there's something a little bit inverted about the relationship between man and machine there. I'm a big believer of creating lots of innovative algorithms that can extract this kind of structured knowledge from lots of the text that's out there and created by people all the time. But I'm not a big believer that you're just going to have lots of people who enter the data very carefully so that machines can then process it.
That was 2007, and I still agree with him, but I've also noticed that both schema.org and microformats.org recently celebrated birthdays. I'm also very impressed with the outcomes of the initially heated reception of schema.org, in particular the agreement on using RDFa Lite 1.1. Every site you see with an embedded Facebook Like button is using another format, the Open Graph Protocol (OGP). There's some interesting analysis from Web Data Commons and Yahoo! Research. The results seem to vary significantly depending on the corpus used.
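For anyone who hasn't looked at OGP, it's just a handful of meta tags in the page head, and, in the spirit of Sergey's point about machines doing the work, they're trivial to extract. Here's a small sketch using Python's standard html.parser; the sample page is made up, and og:title, og:type, og:url and og:image are the four properties OGP requires on every page:

```python
from html.parser import HTMLParser

class OGPExtractor(HTMLParser):
    """Collect Open Graph <meta property="og:..." content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:") and "content" in attrs:
            self.properties[prop] = attrs["content"]

# Made-up sample page with the four required OGP properties
sample = """
<html><head>
<meta property="og:title" content="Hello Worlds" />
<meta property="og:type" content="article" />
<meta property="og:url" content="http://example.com/hello" />
<meta property="og:image" content="http://example.com/hello.png" />
</head><body>...</body></html>
"""

parser = OGPExtractor()
parser.feed(sample)
print(parser.properties["og:title"])  # Hello Worlds
```

That the markup is this easy to consume is probably why it caught on: publishers do a tiny bit of manual tagging, and the machines handle everything downstream.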
At the end of the day, I think there is going to be some kind of meeting of humans doing tagging and semantics and machines doing tagging and semantics, depending on the scale and circumstances. Take my outside-of-work project (and hopefully someday a full-time project), MAPT: it would not be scalable for humans to crawl, extract, and repeat the process to get the information needed. To make it work, it needs a machine learning text extraction approach. But once you have that data, you would want it tagged somehow. And for sure you would need some amount of human QA.
When I decided I wanted to write a blog and host it on rintintin, I needed to find something that ran on rintintin's minimalist set of supported software. At the time that meant it had to run on Solaris 9, work with Apache 1.x and Perl 5.6.1, and couldn't require a SQL database backend (although BerkeleyDB was installed). Blosxom was one of the few that fit the bill, but even back then its development community had gone a little stale. There was also TWiki, which at the time could be run with just perl, rcs, diff and grep. Anything I compiled had to live in my then 20MB of space, although I was able to get an increase to 200MB later.
TWiki required compiling the GNU versions of several of the base tools (the Solaris 9 versions were incompatible), and although I was eventually able to compile SQLite on rintintin, I believe any blogging software I could find that could use it as a backend had other compatibility issues. Later there was also a split in the TWiki community that created FosWiki, and I'm still not sure which side I like or what the difference is in the software. sunfreeware.com is an amazing site with compiled binaries of common software for different Solaris architectures, but I couldn't find a way to extract the binaries from their format without having root access (it looks like the new site, unixpackages.com, may have a saner approach to this). That's a long way of saying I eventually just decided to work with Blosxom.
At any rate, I wanted to add some common features to my blog this week (comments, and buttons to share on Twitter, Google Plus and Facebook) and I was partially successful. The way Facebook implemented their Like button, however, is a touch disappointing, as it's difficult to have multiple Like buttons on one page for different items, and getting Blosxom to do this would require some substantial rewriting (I tried several hackish approaches that all failed). Disqus is also a little annoying in that you can only have one active comment section per page, so you have to go to the permalink for an article if you want to comment.
There have been some attempts to resurrect Blosxom, for example multi.cc had some discussion of a new version. I believe Ode is also based on Blosxom. Do I want to migrate from Blosxom to Ode? You are part of the mystery.
Solutions to avert the energy and climate crises are:
A. Inertial electrostatic confinement (IEC) reactors (e.g.
B. Molten salt reactors (MSRs) (e.g. Liquid fluoride thorium reactors [LFTRs])
C. Aneutronic fusion reactors (e.g. LPPX)
D. All of the above
E. None of the above
I'm pretty sure E is the wrong answer.
There have been several Google Talks on different kinds of cleaner, safer, greener forms of nuclear energy over the last five years. A recent one from Richard Martin on his new book SuperFuel: Thorium, the Green Energy Source for the Future reminded me of another presentation at Google given in 2006 (back when these were on Google Video). I remembered it because of its excellent use of PowerPoint (none) and because it was about a promising-sounding form of green nuclear energy that made me wish I remembered more physics and chemistry. It turns out that one was about a different kind of green nuclear energy, IEC fusion. Reading a bit about that, you quickly discover the people working on aneutronic fusion. The proposed thorium-based technologies use fission.
If you watch or read any of the sources linked, many people are of the opinion that our current nuclear energy sources are unnecessarily hazardous and inefficient because they're based on the methods used in the development of the early nuclear bombs, and that these practices are now fairly entrenched. This makes it difficult to get funding for research and development into other options. Not to mention that nuclear bombs and accidents at nuclear power plants have shaped public opinion against the option in general, and it will take a kind of marketing campaign to get public support behind the safer, cleaner and greener options.
I don't want the .edu domain to fool you too much here. I'm a computer geek, not a physicist or chemist or nuclear engineer. But I would like to invite anyone reading this to join me in learning more about the re-emerging green nuclear technologies, and to invite others to do the same. (Unrelated: I added Disqus comments, a tweet button, a Like button, and a +1 button for each article. I'm going to be testing them out, so please remember in this case the additional noise is more than just shameless self-promotion.)
The Audio Prof, Digital News Test Kitchen, CU Libraries News, CU Money Sense, sciencegeekgirl, Exemplary Support (Chris Bell), Jeeg Salbian, Paul O'Brian, University Communications, CU ATLAS, CU Boulder Career Services
This weblog is not meant to represent the University of Colorado in any respect; the information and opinions contained herein are solely my own.