Posts tagged: Brain Dump

The forest for the trees

October 13, 2009 8:14 am

I wonder what Sir Ken (Robinson) would say about this?

It seems to me that this sort of thing happens far more often than we might think. So many kids think differently, or on scales that are not obvious to adults, that there is likely at least one of them in every grade level in every school. In defense of the teachers, who are pushed by standardization and deadlines, they don’t have time to think beyond “is the student doing X?”. If we all had a little more time, we might be able to see some of these creative geniuses for what they are rather than trying to “understand the problem”.

[EDIT]

Rethinking delivery – another example of 50% – or is it?

September 30, 2009 1:51 pm

Between getting photos and videos done from the field trip that I took with Science 100 last week and some of the meetings that I’ve had this week, I’ve noticed something has changed in the world. This is an obvious change, but it seems that very few people want to accept it. This change is in the way that we think about delivering information. One of my colleagues put it together perfectly in my last meeting – “we can no longer guarantee delivery”.

The University (mine as well as others) functions on the premise that it can ensure contact with the student, from the assumption that they are going to open the physical mail that offers them acceptance (a holdover from the age when the letter was the only way news traveled, when one could assume it carried authority and guaranteed delivery; think of all the journals that are “Letters of/in”) to the LMS that delivers their course content. It believes this so strongly that it pours massive amounts of resources into making sure the message gets out to the student. But for all its efforts, it still fails, and seemingly it is failing far more often now than ever before. Why?

Well, it seems that nobody told the older generation of admins that in a media-saturated universe, students won’t, or can’t, rely on only a single source for their information. Media has to be easily accessed on an enormous range of screen sizes and through a massive range of bandwidths. These are both problems that we thought we had left behind as desktops became more powerful, then were eclipsed by laptops, and those by phones and netbooks. Content has to go out over physical media, passive media (radio, TV), active media (email) and social media. If you want to be sure that students know about an important date, there needs to be an ambient buzz generated within at least two of their content channels, and if there is media to go along with it, it should be accessible on almost any device.
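
To make the idea concrete, here is a minimal Python sketch of “fan one announcement out over several channels”. The channel functions are made-up stand-ins, not any real campus system:

    # Toy fan-out: push a single announcement to several hypothetical channels
    # so that at least two of a student's "content channels" carry the message.

    def send_email(msg):        # hypothetical stand-ins for real channel APIs
        print("email:", msg)

    def post_to_feed(msg):
        print("rss/atom:", msg)

    def post_to_social(msg):
        print("social:", msg)

    CHANNELS = [send_email, post_to_feed, post_to_social]

    def announce(msg, channels=CHANNELS):
        """Deliver the same announcement over every configured channel."""
        for channel in channels:
            channel(msg)

    announce("Reminder: Science 100 field trip forms are due Friday.")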

I don’t think there is anything here that is groundbreaking, or anything that regular readers won’t have already thought of or understood. But this seems to be at the heart of many of the problems that instructional designers face when trying to explain why there is no standard, cut-and-dried way to create a course, announce a deadline or show a demo if you want to be sure you get through to your audience. Much like the advertising adage that “only 50% of advertising works”, it is as likely as not that you are going to actually connect with your audience, so you need to be sure you have more than one way to get to them. Again, this isn’t anything that anyone who has gone through any form of teacher training won’t have heard in the past when learning about multiple encoding.

So why is this worth posting? Well, for a common sense observation, it isn’t really that common… namely, that with all the options we have for communicating, not everyone is going to use the same set.

When pressing DVDs earlier, I was asked by two people who were going to be using the media why I was also making stand-alone files. I explained that the files would allow the various users to play the content anywhere there is a network connection. The older individual didn’t understand the need for the files; the younger didn’t understand the need for the physical media. When making photos available, another individual was hoping that I would be able to provide thumbnails for easy loading and an easy way to see the pictures (slideshows etc.). This morning, in a meeting about podcasting, the LMS and social media, part of the room “got” it and the other part didn’t – the kicker being that who got what was mixed across the different topics and seemed independent of generation.

Help! The Interwebs are Broken!

August 6, 2009 10:25 am

I remember, way back in the days when search engine home pages were not reliable, that when Yahoo! or MSN went down, people thought the entire net was down because they didn’t know how to input URLs. I don’t think much has changed in the last 5-10 years in terms of how many people use the ’net, except that instead of search engines being the home page/tab, people have Facebook or Twitter as their starting point. Many people don’t even go further than Facebook anymore, as the only reason they go online is to catch up with friends. So today’s meltdown (C|Net, Mashable) of Twitter and Facebook was certainly an annoyance for folks who had their ’net break this morning (and a reminder of how dependent we are getting). But outside of the traditional web, I think where this might have had a bigger impact was in mobile computing – I wonder how many people who have phones hooked into the two services all of a sudden felt the sensation of being disconnected? In my experience, the biggest drivers for mobile data, at least in my circles, have been related to social networking. People want to be able to go out and meet their friends, share those experiences with those who are not there, and store them for those who are.

One of my colleagues in the office suggested that the attack on Twitter might have been some State clamping down so hard that it knocked Twitter down for everyone. I don’t see that being too far out of the realm of possibility. But if a State did attack a company, who is there to defend it? The “cyber forces” of its home country, or some other body (the UN?)? I don’t know, but as services become international in use and importance, the defense of those services becomes the interest of the world as well, once again throwing fuel on the fire that is the debate over the role of borders on the ’net.

PS. Be sure to check the update.

More traffic from Facebook, building from Twitter.

July 25, 2009 9:09 pm

In recent weeks it seems that I’ve received much more feedback from within Facebook on my posts imported via RSS from here. I can’t figure out why there has been this sudden explosion of interest in my writing, but I’m thinking a change in how imported data is presented might be part of the reason. The other part is that there is an extra connection between me and the people who are my friends on Facebook. This personal connection certainly helps, as there is a certain “karmic” (for lack of a better term right now) element: the people reading my stuff know me, rather than seeing me as “just another blogger”.
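
For what it’s worth, the plumbing behind that kind of import is simple enough to sketch. This rough Python example assumes the third-party feedparser library and uses a placeholder feed URL (not my real feed); it just pulls the latest posts from an RSS feed the way an importer would:

    # Sketch: fetch recent posts from a blog's RSS feed and hand them to
    # whatever service re-publishes them. Assumes the feedparser library.

    import feedparser

    FEED_URL = "http://example.com/blog/feed"  # placeholder feed address

    def recent_posts(url=FEED_URL, limit=5):
        feed = feedparser.parse(url)
        return [(entry.title, entry.link) for entry in feed.entries[:limit]]

    for title, link in recent_posts():
        print(title, "->", link)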

The same thing seems to be happening from Twitter as well. People seeing that I’ve got new posts there might be more compelled to read my stuff because they have likely found other unrelated things I’ve posted interesting as well.

It seems that this personal connection is certainly important. In classic media, this was evidenced by “trusted news people” like Walter Cronkite (RIP). Now, with social media, it seems that earning that trust takes much more than doing a great professional job; it means making some manner of personal connection – à la Ashton Kutcher.

So endeth my brain dump.

Edit – this seems to dovetail nicely with some of the thoughts that Leopoldina Fortunati shared in Edmonton when she gave a talk to the MACT students on gossip and the nature of celebrity, neighbors and family.

What I learned from Twitter this week

July 21, 2009 12:42 pm

We’ve all seen on the news how Twitter is being used in Iran and elsewhere for people to share information that governments don’t want shared, and in disasters to help inform others of the situation. But over the weekend, it was in some ways “my turn”. The storm (which may or may not have included a tornado) that passed through Edmonton and caused all manner of damage was being tweeted about so much that #yeg became a trending topic worldwide… and on a weekend to boot. This might be an artifact of the adoption rate in the city, but it certainly was an experience to be able to get updates about what was happening with the storm in different parts of the city far faster than the traditional news channels were getting them. It will be interesting to see how Twitter continues to evolve over time, but it is also going to be interesting to see what the next service will be that we don’t know we need right now. Many people say that it will be some manner of audio/video version of Twitter, but I think that won’t be the case. I think the next instant news service will also be text based, if only to fit within the limitations of our input devices. We might access this service through augmented reality, but it will be populated by, and still be accessible via, text.
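
This is not Twitter’s actual trending algorithm (which I don’t know), but a toy Python sketch of the basic idea: count hashtag frequency over a window of recent tweets, and a local burst of posts is enough to push a tag like #yeg up the list.

    # Toy "trending" illustration: tally hashtags across a batch of tweets.

    import re
    from collections import Counter

    def trending(tweets, top=3):
        tags = Counter()
        for tweet in tweets:
            tags.update(tag.lower() for tag in re.findall(r"#\w+", tweet))
        return tags.most_common(top)

    sample = [
        "Huge hail on the west end #yeg #abstorm",
        "Power out downtown #yeg",
        "Funnel cloud spotted? #yeg",
    ]
    print(trending(sample))   # [('#yeg', 3), ('#abstorm', 1)]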

Are eBooks the best way to replace physical texts?

June 9, 2009 11:22 am

The Governator is saying that California is going to do away with physical textbooks, but is this really going to be a good thing? On the surface there are certainly a lot of reasons one might assume make this manner of transition sensible, but those quickly fall away if you think about what one would really be giving up.

The upside of eBooks is that they will always be current; the downside is that they will only be current for a price and for a single user. The last thing that is going to happen is the major publishers releasing content for free and without any DRM. Physical textbooks, on the other hand, are not controlled by anyone other than the person flipping the pages and are transferable (should they ever leave the locker/room/bag) between students at will and at minimal cost. eBooks are “pay to play” vehicles that have the potential to cripple one of the most basic mechanics of our venerable (and there is another issue… ) education system. Textbooks are not free of their faults either – Seth Godin’s post on this certainly cuts to the quick when it comes to texts, especially in higher ed:

  • They don’t make change. Textbooks have very little narrative. …
  • They don’t sell the topic. …  No one puts down a textbook and says, “yes, this is what I want to do!”
  • They are incredibly impractical. Not just in terms of the lessons taught, but in terms of being a reference book for years down the road.

An article from the EDUCAUSE Review goes over many of these ups and downs rather well. But in the closing days of Web 2.0, with the availability of all manner of “free” content, it seems to me that this change is about more than just the medium the textbook comes in. This change may also legitimize all the work that schools with old or destroyed texts have been doing for so long to make use of resources other than the textbook.

Waving farewell to silos – or are we?

June 1, 2009 11:55 am

Google’s Wave is supposed to be the answer to “what would email look like if it were invented today?”, but the more I hear and read about it, the more I think it might be “what would interactions on the web look like if they were all invented today?” That comes with the understanding that the ideas for Wave come from having lived through earlier versions, and that Wave only looks the way it does because we have elements of old paradigms present and refined. If this were all invented today it might all look like <gets ready to duck><ducks> Twitter <cautiously pops back up> for all we know. But enough of the “if today” conversation; what could come from Google’s latest creation?

To the common, slightly geeky user, I think Wave could really become the ideal social networking tool. Geeks have specialized silos to store all their bits and pieces, and they use tools like RSS to pull them into this place or that. As a last resort, geeks can use a direct link to post their content to forums or other systems as needed. These people have likely been hesitant to use social networking tools like Facebook because of the grey space that used to define who owns what. But to the masses, social networking sites are their silo. They drop all their pictures, their writing and their personal information there without very much thought. This may be a side effect of having a generation of ’net users using LMSs at school.

If Google succeeds, then there will be this great world of data that “just is” and that, through the magic that is Wave, gets connected up into conversations happening with variable levels of synchronicity to allow for collaboration. But a problem still exists insofar as the geeks will still need to store their data in their specialized silos and the non-geeks will still need a general silo. Google’s solution to this might be the “federated” model of servers – which is great… but who is going to want to do it? I can’t see Facebook all of a sudden thinking that it would be a good idea to set up a Wave server to allow its users to collaborate with non-users, and the same is likely true of all manner of other SNSs and other services out there. Of course, if Wave is all that it is purported to be, then these other companies might just make the leap for their users. But what if the common geek and the common ’net user aren’t the main audience for this service?

If the main design ethos behind this technology was to rethink email, and the main users of email are now within corporations, and the main place that people need to collaborate is at work, then the office space might be where one should look to see what Wave might be all about. The “enterprise” market is all about communication and collaboration, and there are many tools to allow individuals within these corporations to achieve that end. If there is a company that wants to hoard all its data but still wants people to work together, the possibility exists for a “standing wave” (my term/attempt at a pun) that doesn’t move data anywhere outside its definition. Other companies might want an easy way to have teams of people collaborate between silos should the need ever arise, and it seems that Google has that covered as well.

So which enterprises (if any) want to be able to both hoard and share data? Well… I would think Higher Ed (and education in general) would love to be able to share when needed and protect the rest of the time. Thinking simplistically, universities and school boards are almost all now using an LMS to store most, if not all, content related to their mission in one place. This is fine for most of the day-to-day activities of the enterprise, but not for those odd times when outside users need access. Sure, the host institution can provide guest access through many means, but this is a kludge, as the user likely has content at their home institution that might be useful in the collaboration. Bringing this content over is often a hassle even if both institutions use the same system. Wave might be able to change this.

In Higher Ed, this might mean that researchers would be able to collaborate with ease, as “Wave Enabled” (I call TM if nobody else has used this term) institutions would be able to use permissions to give guests access to the host institution and then move content back and forth, all through one U/P combination. This, of course, is one of the more exotic situations that might come about. A far more mundane example would be students taking courses. If Wave is used as the interface to an existing or future LMS (Wave based or not), students will have, for the first time, an LMS that is based around the idea of collaboration and not simply storage and presentation. Michael Feldstein has a great writeup on this here.
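
I have no inside knowledge of how Wave’s federation would actually handle this, but the permission model I’m imagining is simple enough to sketch. In this toy Python example (all the space names and identities are made up), a host grants a guest from another institution access to one shared space through a single identity:

    # Toy cross-institution permission model (not Wave's real protocol):
    # a shared space maps to the set of identities allowed to touch it.

    shared_spaces = {}   # space name -> set of "user@institution" identities

    def grant(space, identity):
        """Host adds a collaborator (local or guest) to a shared space."""
        shared_spaces.setdefault(space, set()).add(identity)

    def can_access(space, identity):
        return identity in shared_spaces.get(space, set())

    grant("soil-survey-project", "researcher@ualberta.ca")
    grant("soil-survey-project", "colleague@otheruniversity.edu")  # the guest

    print(can_access("soil-survey-project", "colleague@otheruniversity.edu"))  # True
    print(can_access("grades-archive", "colleague@otheruniversity.edu"))       # False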

So when all is said and done, we might not be saying farewell to silos, but with tools/ideas like what Google has put forward, we are starting to punch out windows to allow those inside those silos to “Wave”. If you are still interested in other takes on Wave, check out this writeup on ComputerWorld.

Support the workflow

April 30, 2009 1:48 pm

WARNING – this is a long dump to get the ideas in my head out and to welcome any feedback that might come by. I’ve been thinking about how LMSs should be supported. Rather than looking at the software, staff and systems and adding more to fill the holes identified by a review, perhaps it would make more sense to look at the workflow and see how it needs to be modified or supported. I believe that this will identify a number of people who are not directly involved with the LMS as being important to the proper functioning of the system. So rather than trying to catch the raging river at the end of its course using brute force methods or adding staff and services, it might make more sense to look at the headwaters of the workflow and support change there to make it more manageable downstream.


Logins and Media Centers

January 16, 2009 4:38 pm

Just some quick thoughts that started at the beginning of the day and got dumped to the screen at the end. In the office we have been talking about media centers quite a bit – how to create them, what hardware/software to use, and whatever else. But when we came to comparing the different media centers – Windows Media Center, AppleTV, WDTV, XBMC, Boxee – we came to an interesting conclusion.

People don’t need “secure TV”.

It seems that the software packages that are the most popular right now are those that don’t handle users – the ones that mimic the way TV was/is watched before the Media Center arrived. This means that systems like Boxee are probably destined to remain slightly niche products (for laptop/computer-bound viewers), as it is not likely that people will ever want to share what they are enjoying with others who are not sitting in the same room. At most, what is watched may be shared between “locals” – family and friends who you see face to face.

The TV is a common space, so those apps that are not user driven (like AppleTV/WDTV/XBMC) are likely to win out, as they simply provide a management/access tool for the media. All these systems need to do is provide content control, and that will be all that is needed.

Enough of a brain dump for now.

My Digital Home Evolves

January 3, 2009 12:42 pm

Back in Oct/Nov I wrote a post on why I didn’t go with the PS3 as a BD player, citing it as overkill for the immediate needs of my home and noting that moving to a digital-only format didn’t fit. I think part of the lack of fit was a result of a couple of other situations that have now evolved. The first is storage. Previously my storage situation saw me housing data on a range of firewire drives strung off my now aging PowerMac G5 (Dual 2GHz); all that storage came to just over 900GB. Not bad if it were one volume, but spread over 6 drives the free space wasn’t going to be good for much more than scratch, totaling maybe 75GB put together. Most of what I have stored are photos in various stages of restoration, a couple of wedding videos (mine and my brother-in-law’s) that are starting to be captured, music, and archived copies of previous video work that I had done. So the transition to digital video as a format used within the home wasn’t really something that made much sense. Sure, I could (read: would) have upgraded the PS3 HDD to 320GB, but that would still likely have held only a portion of all my DVDs at one time (I think), as the PS3 would not be able to stream. The second is, of course, need – I didn’t see the need, nor did my wife (and only now is traction being made there), to have free and easy access to all manner of videos and movies anywhere in the house. So what changed? Well, I took a look at the number of photos that I will likely take in 2009 (I’m thinking about 10K), the minutes of video (24h?) and the change in workflow and “place of entertainment”.

The new captures would have needed about 300GB, and the change in workflow would need some way to allow everyone in the house to add photos and video into a central store. The first solution that popped into my head… NAS… and then Drobo! Well, looking at the price of the Drobo, I started to think that it might be a bit on the high side for what I needed, but it would give me some cool computer-like features (torrents and other fun stuff). That got me to thinking… well, I can do that on my old tower… and it’s a Mac, so getting things going would be pretty easy via simple sharing. So for the cost of just the enclosure, I bought two 1.5TB drives (Seagate 1.5TB Barracuda – firmware CH1H, saving me from upgrading like this) from Memory Express, including the extended coverage, with change left over. So I gained more storage, but I lost redundancy – I know I’m playing with fire, as it is “not if, but when” a drive will fail, but I’m sure that I will be able to get some manner of backup solution in place within a year (and it might be a Drobo should my baby photo business take off the way that I hope it will).
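
Here is the back-of-the-envelope math in Python; the per-photo and per-hour sizes are my own rough assumptions, not measurements, but they land in the same ballpark as the 300GB figure above:

    # Rough yearly storage estimate. Per-item sizes are assumed, not measured.

    PHOTOS_PER_YEAR = 10_000
    MB_PER_PHOTO = 10          # assumed average for originals plus edits
    VIDEO_HOURS = 24
    GB_PER_VIDEO_HOUR = 9      # assumed average for captured/edited footage

    photo_gb = PHOTOS_PER_YEAR * MB_PER_PHOTO / 1024
    video_gb = VIDEO_HOURS * GB_PER_VIDEO_HOUR
    print(f"photos: ~{photo_gb:.0f} GB, video: ~{video_gb:.0f} GB, "
          f"total: ~{photo_gb + video_gb:.0f} GB")
    # -> roughly 100 GB of photos plus 216 GB of video, ~314 GB in total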

The “places of entertainment” changed from where the TVs are to many places where they might not be, and we might not always be able to get to the physical media that we would want to watch anyway. Babies will do that, right? After ripping the first three seasons of MASH with Handbrake (0.9.1, even though it is now deprecated, rather than 0.9.3, which seems to be very unstable – though 0.9.2 might work and I might use that to convert the non-AppleTV files in batches as well) and dumping off many of the cell phone videos, the “rest of the family” seem to have seen the light. So, using the laptops, we are now able to watch any of the ripped or torrented videos anywhere in the house – explained as “doing the same thing for DVDs as we did for CDs when we ripped them”.
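
One way to script batch rips might look like the Python wrapper below, which shells out to HandBrakeCLI with the AppleTV preset. The flags are from the 0.9.x-era command line as I recall them, so treat them as assumptions and check HandBrakeCLI --help on your own install first:

    # Sketch: rip one DVD title per call using HandBrakeCLI (assumed installed).

    import subprocess
    from pathlib import Path

    def rip(source, out_dir, title_number):
        out = Path(out_dir) / f"title_{title_number:02d}.m4v"
        subprocess.run([
            "HandBrakeCLI",
            "-i", str(source),        # DVD device or VIDEO_TS folder
            "-t", str(title_number),  # which title on the disc
            "-o", str(out),
            "--preset", "AppleTV",
        ], check=True)

    # e.g. rip("/Volumes/MASH_S1_D1", "/Volumes/Media/MASH", 1)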

After adding the additional drives to the Mac (the old towers are not quite tool-less and not nearly as intuitive as the new towers), I found that the g connection that I had to the server was fine for pulling down torrents and moving small files, but larger files (especially when the DVD drive died the same day the G5 moved to the basement) were a no-go (hello, Firewire sneakernet!). It also took a few seconds longer than felt “natural” to buffer video. So I hunted down a D-Link USB wifi adapter (yeah, I know… but I got it for a steal), the DWA-130, and plugged that in to get an almost entirely n network in the house (the Airport Express is g). At first I was a bit worried because the box says that it is Windows only, but a quick Google solved that with Mac drivers.
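
To put rough numbers on why g wasn’t cutting it: assuming usable throughput of about 20 Mbps for 802.11g and about 90 Mbps for 802.11n (my own ballpark figures; real-world results vary a lot), moving a single DVD-sized rip looks like this:

    # Rough transfer-time comparison for one DVD-sized file over g vs n.
    # Throughput numbers are assumptions, not measurements.

    G_THROUGHPUT_MBPS = 20    # assumed usable 802.11g throughput
    N_THROUGHPUT_MBPS = 90    # assumed usable 802.11n throughput
    FILE_GB = 4.0             # a typical single-layer DVD rip

    def transfer_minutes(file_gb, mbps):
        megabits = file_gb * 8 * 1024  # gigabytes -> megabits
        return megabits / mbps / 60

    print(f"802.11g: ~{transfer_minutes(FILE_GB, G_THROUGHPUT_MBPS):.0f} min")
    print(f"802.11n: ~{transfer_minutes(FILE_GB, N_THROUGHPUT_MBPS):.0f} min")
    # A ~5 Mbps video stream fits either way; moving whole files is where n helps.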

So you might have noticed that I mentioned torrents up there… and thought… WHA?? He doesn’t torrent – he’s all about buying copies and not pirating. But the way I’m using the technology is to download copies of DVDs that I already own so I can build my library quickly (with these being replaced by Handbrake AppleTV preset rips – so I’m well within what the RCMP would/might consider safe, IMO) and to create a “better PVR”, as it seems that torrents, at least in Canada, are going to be another way to get to content (at least for the CBC). To be able to play the torrents, which usually come down in QuickTime-unfriendly formats, I installed Perian on the laptops to give them access to .avi, .xvid and .mkv files. Should the PS3 ever come down in price (May?), it is already able to play these formats, and should I not want to hack an AppleTV (via some form of XBMC/Plex) if I ever get one, I can use Video Drive to move videos into an iTunes library (choosing to manage the files on my own so that I don’t lose flexibility).

Unfortunately, the flexibility that I should have with videos may not carry over to music, but I am still working on that part, as I still have to move all the music from the various machines to the server. I know I can share a common library, but being able to add and manage from more than one place might still be a bit of a trick. I could use something like SuperSync, but I don’t think the three or four times a year that music gets added to the collection warrants that manner of investment. I was also thinking that this same issue would come up with photos (the origin of this adventure).

When testing whether I could manage a photo library from many places, I was frustrated to see that Aperture doesn’t allow its library to be accessed over a network drive (I think it is a permissions thing… so I could have figured it out – and might still – but for now, FW externals will do the trick, now that they have more free space), but very pleased to see that iPhoto just plays fine over the network and allows for a simplified import system for photos and video onto the server for everyone.

Looking back over the sharing of music, videos and photos, I came to the realization that the family will still need to “grow into” this new system, so the compromises are acceptable. With the music, I figured having independent libraries is still a good thing because the laptops are mobile and might have to go elsewhere… binding all music to a server would have lost one of the key benefits of the laptop. When it comes to videos, compressed files are now easy enough to pull off the network and take along when the need arises (freeing space in the diaper bag :)). And photos… well, aren’t they bound to the network now? Well, yes… but those that we want to share are shared via email… and that is still easy enough to access anywhere.

Since this is an evolution, we have to start somewhere, right? As such, I’ll be chronicling how this setup evolves, and in the next post (or sometime soon) I’ll try to put out some ideas as to how/why one might want to do this to help support their own classroom/research lab… even though the IT folks already have a network in place.

So endeth the brain dump.
