Friday, January 23, 2015

Interview for Elements Academy of Martial Arts

I was recently interviewed for this blog post about the Elements Academy of Martial Arts Brazilian Jiu-jitsu program: Martial Arts in Your 40s and Why Jiu-Jitsu is Right for You

Update: Some of their posts have since been removed, presumably during spring cleaning.

image

We train with instructor Todd Smith, who is the only black belt under Royce Gracie in Canada. Great club, great people. Check it out. Gracie Jiu-jitsu is amazing--even for us old folks :)

Monday, December 08, 2014

Chromebook 2 Feedback and Review

Now that I've had my Chromebook 2 for a reasonable amount of time, I thought I'd post some more details about my experience.

chromeos

In general, I'm still thrilled with the machine. It's an inexpensive computer with a full laptop typing experience. The audio and video are good, the battery life and charge times are fantastic, and so much is available via web apps these days that I can almost use it for everything. I also like that the operating system is simpler than a traditional desktop O/S. For all the reasons I mentioned in my previous post (Why I Just Bought a Chromebook), I think it's a fantastic device and I'm recommending it to people.

My issues pretty much stem from the fact that Chrome OS has a tiny web app store at this point. There are two ways to fix this: either more web apps in the Chrome Web Store or enabling Android (Google Play) apps on Chromebooks. There are rumours of Google Play capability coming to Chromebooks, but I'm not optimistic that will happen on my Chromebook 2. However, I sincerely hope my pessimism is misplaced.

Here are the main issues I've encountered:

1. GoToMeeting doesn't work (even the web version only allows you to listen--you can't share your video or talk). This is a deal breaker for using my Chromebook as my only computer. Whether it's GoToMeeting or WebEx, I have to be able to participate in meetings using the technology that my employer and customers are using. Yes, Google Hangouts and Sqwiggle work--that's great--but it doesn't solve the problem. (As I pointed out in my last post, I'm not trying to replace my phone with my laptop and I can actually use WebEx or GoToMeeting on my phone, so that mitigates this problem.)

2. Skype doesn't work. This isn't a work problem for me, so I'm listing it as a separate point. I use Skype with family, so I want it on my laptop. (Again, I can use this on my phone.)

3. Torrent files. I'm working on this one, but I have yet to find a good solution for downloading files via torrents. There are apps in the Chrome Web Store, but I tried one and it just didn't work. I'll have to try another. There's one that costs a few dollars--I might have to resort to actually paying for an app.

4. Heat. I'm not actually using a thermometer, but the Chromebook feels hot to me when I'm using it on my lap. By comparison, my wife's MacBook Air doesn't seem to get as warm, but my old Acer laptop is actually hotter than the Chromebook.

5. Google Cloud Print. I was pleased to learn about the Google Cloud Print option. It allows me to print to a printer connected to another machine (because you can't install print drivers on Chrome OS). However, it simply doesn't work very well. My printer isn't that old (it's wireless), but printing with Google Cloud Print produces such poor output that it's almost useless. For example, it's common for my documents to print with the last few characters of every line cut off.

6. SD Card. This is a minor annoyance, but every time I open my Chromebook, I'm told to safely remove my SD card--not on reboot, mind you, but literally every time I log in. I don't want to remove the card and I shouldn't get that error all the time. I just ignore it, but I'm not the only one seeing this issue and I hope it gets resolved in an update.

Saturday, November 08, 2014

Why I Just Bought a Chromebook

I’m sure some people would see the title of this post and say, “Why wouldn't you buy a Chromebook? They’re a steal!” Ultimately, I agreed with this view and bought one. However, I went through a period of uncertainty when I looked to replace my current laptop. Here are the main factors that swayed me in favour of the Chromebook 2.

toshiba chromebook 2 geeklit cawood blog

But before I get to that, I think it’s important to mention that I took my time with this decision. What else did I consider? I weighed the pros and cons of many different options: MacBook Air, MacBook Pro, Windows laptop, Ubuntu laptop, Microsoft Surface, Windows tablet, Android tablet, and iPad. We already have a MacBook Air, Windows desktop and iPad in the house, so that also played into my decision.

1. Affordability -- Chromebooks are remarkably affordable. Unlike a MacBook Air, or a Surface 3, they're cheap. I bought my new Toshiba CB35-B3340 Chromebook 2 for $329.99. That's about a third of what I would have paid for the others. Even if it turns out that I don't like the Chromebook, I've made a small investment to find that out.

2. Getting my work done -- I don't care about brands very much and I'm not a zealot when it comes to technology. I just want to get my work done and spend more time with my wife and daughter. If I can do everything I need to do on the Chromebook, I'm happy. These days, I spend most of my time either in email or in Word documents. I can use web versions of Outlook, Gmail and Word to do all that. If I need something else, I can remote into my Windows Server at home or my Windows desktop at work, or just use my wife's MacBook Air. An obvious example is Visual Studio for coding. However, there is a web version that I can try even for that. I'm very curious what the dev experience will be like on a Chromebook.

3. I’m not replacing my phone -- While it's true the Chrome App Store doesn't offer the millions of apps in the iOS Store, or the Google Play store, or even the Windows App Store, that's not what I need in a laptop. I have my phone with me at all times and I don't need to send text messages or play Flappy Bird on my laptop. (BTW -- Angry Birds is in the Chrome App Store)

4. ‘Traditional’ operating systems are too much work -- I was already thinking this way, but a few weeks ago, I picked up a Windows 8 tablet I use at work and found that it was running Windows Update. I just wanted to write a note in OneNote, and I couldn't because the operating system is huge and powerful and therefore takes time to update. This is just one example of the ways that large traditional operating systems (Windows, MacOS, Linux) are just too much work for my use case (see "Getting my work done" above). Chrome O/S is simple and I like the sound of that.

Update: Now that I have my Chromebook (I'm writing this update on it), I can say that I'm impressed. I ran into a minor issue during setup (Chromebook setup freezes at Determining Device Configuration) but restarting was enough to quickly resolve the issue. Since then I haven't hit any hurdles and I'm really enjoying the experience.

Thursday, October 30, 2014

Chromebook setup freezes at "Determining Device Configuration"

Just a quick note about this setup problem that a few new Chromebook users have encountered. During the initial setup phase, the message "Determining Device Configuration" appears and the setup process stalls.

Apparently, one person waited four hours in this state. I guess I don't have that kind of patience. I simply shut down the computer (after about 15 mins), restarted it a few seconds later and the setup process completed immediately. If this doesn't work for you, you can search for info about hard resetting your Chromebook.

Tuesday, September 30, 2014

Leaving the Microsoft MVP Program

In a few days, my time as a Microsoft Most Valuable Professional (MVP) will come to a close. Sad, yes I know… OK, not really so much sad as the natural evolution of things. I first received the MVP award a mere four days before my daughter was born. Even back then, I knew that I would have to eventually bow out of the program. Of course, there are people in the program who successfully juggle their job, home life, and somehow still manage to be excellent MVPs, but I don’t feel I’m one of them, so I’ve asked not to be renewed this year.

The MVP program is excellent and I’ve enjoyed being a part of it. Most of all, I’ll miss going to the MVP Summit and hanging out with the other MVPs. There are other benefits of course—access to early information, free software—but those don’t match the amount of effort it takes to get the award. As I describe in the post mentioned below, you really have to be doing community work because you love doing community work—otherwise the time investment just doesn’t make sense.

I already sent the obligatory “so long and thanks for all the fish” message to my fellow MVPs. I wish you all well and keep up the excellent work!

Microsoft MVP Banner geeklit cawood

BTW -- If you don’t know what the Microsoft MVP award is, here’s some content from a post I wrote called How do I Become a Microsoft MVP?

First of all, if there is only one thing to remember about the MVP award, it’s this: the Microsoft Most Valuable Professional (MVP) award isn’t a certification. There are no set criteria or steps that someone can take to become an MVP. As the name implies, it’s an award.

“This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in SharePoint Services technical communities during the past year.”
- Microsoft

In practical terms, this means helping with community-focused resources such as contributing to newsgroups, speaking at conferences, writing/blogging about your subject (e.g., SharePoint) and contributing code to CodePlex (an open source site).

Microsoft MVP Logo geeklit cawood

Wednesday, August 27, 2014

My ALS Ice Bucket Challenge Video

Here is my ALS Ice Bucket Challenge video. I was challenged by my good friend and advocate for ALS fundraising efforts, Rasool Rayani.

You can contribute online to the Walk for ALS (ALS society of BC).

"Also known as Lou Gehrig's disease, amyotrophic lateral sclerosis is a progressive neurodegenerative disease that affects nerve cells in the brain and the spinal cord. There is no cure and only one medicine to slow its progress has been approved."
http://www.cbc.ca/news/business/ice-bucket-challenge-brings-millions-of-dollars-to-battle-als-1.2739663

Thursday, July 10, 2014

MVPs and the Evolution of Windows

I’ve been mentioned on the Microsoft MVP Award Program blog. The post is about MVPs and the evolution of Windows.

From the article:

For more than two decades, MVPs have been on the forefront of helping people around the world make the most of their Microsoft technologies, including its centerpiece, the Windows operating system, from Windows 3.1 to Windows 8.1 and all the releases in between. Today, with Microsoft’s new rapid release cadence, their expertise is more important than ever…

In Canada, five MVPs produced guidance for the Windows XP End of Life campaign. Here are some highlights:

Brian Bourne | Windows XP End Of Support – Mitigating the Security Concerns
Yannick Plavonil | Chères entreprises, réveillez-vous Windows XP c’est fini dans un mois!
Sean Wallbridge | I Love My Surface Pro 2 and Windows 8.1
Colin Smith | XP End of Support – Do You Need New Hardware?
Stephen Cawood | Windows XP End of Life Is Coming

Turn off Image or Video Prompt in Snagit 12

If you haven’t tried Snagit, you should check it out. It’s much better than relying on the old school screen capture mechanisms. When I was taking screenshots for my books, Snagit saved me a great deal of time. As just one example, try capturing an open dropdown menu using just Windows screen capture shortcuts—it’s like trying to catch a greased pig.

I just upgraded to Snagit 12 and found that every time I take an image capture, I’m prompted to select either a video or image capture (see the big blue buttons in the screenshot below). I rarely use the video capture, so this is a waste of time. Here’s how to turn it off.

image

First, check if you’ve got an old version of Snagit installed. I had 11 and 12 installed and was not getting the behaviour I wanted. I had to uninstall Snagit 11 to get rid of the old editor.

Next, go to the preferences window and check the hotkey associated with the default capture profile. I like to use Print Screen, so I’m going to change the “Global capture” shortcut key to something else. To get to the preferences window, you can right-click the Snagit icon in the notification area (the bottom right corner of your screen, with the hidden icons), then choose Preferences and the Hotkeys tab.

image

After I’ve freed up the right shortcut key, I’m going to associate that key (PrtScn) with a profile that doesn’t ask if I want to take video.

To do this, choose the “Send to Clipboard” profile from the Manage Profiles dialog, and then click the Hotkey button at the bottom to associate this with the keyboard shortcut you’d like to use. Note that I wanted to go directly to the Snagit editor, so I have that enabled.

image

Now I’m back to the lightning fast capture that I’ve come to love. When I use the hotkey, I go straight to the Snagit editor. If I wanted, I could add styles (such as borders) as part of the profile and they would be automatically applied.

image

P.S. If you’re wondering how I captured the Snagit windows: I cheated. I used Alt+PrtScn to do it old school.

Wednesday, June 18, 2014

Aquatic Informatics Blog Post: AQUARIUS: Where Smart Water Data Meets the SMART Tunnel

Check out my first blog post on the Aquatic Informatics blog: AQUARIUS: Where Smart Water Data Meets the SMART Tunnel

In this post, I discuss the engineering feat that is the SMART (Stormwater Management and Road Tunnel) Tunnel. It stretches 9.7 km beneath Malaysia’s capital and functions both as a roadway and as a storm drainage system.

SMART Tunnel Aquatic Informatics Blog by Stephen Cawood

- Image Courtesy: ENTURA, Hydro Tasmania

From the post:

The “Stormwater Management And Road Tunnel” (SMART Tunnel) has been featured in many news stories and television shows such as Extreme Engineering on the Discovery Channel, Build It Bigger on the Science Channel, Megastructures on the National Geographic Channel, and Man Made Marvels on the Science Channel. The reason the Kuala Lumpur tunnel is so amazing starts with its length: it is the second longest storm drainage tunnel in Asia at 9.7 km (6.0 mi). The truly amazing aspect of the SMART tunnel is that it’s the first dual-use storm drainage and road structure in the world. The tunnel is mainly used to ease traffic congestion in Kuala Lumpur, but there is a storm drainage system under the two decks of cars. Essentially, the tunnel acts as a reservoir to decrease the peak of water flows of the Klang river and therefore mitigate the severity of floods. However, that’s not all. In the event of a bad flood, the road decks can also be used to drain the storm water away from the city. That’s right. The car levels can be completely filled with water as well!

Update: It was recently announced that a similar tunnel project is planned for Jakarta:

Jakarta Announces Plan for Integrated Tunnels to Manage Traffic and Floods

Thursday, May 29, 2014

Windows XP End of Life Update: Win a Microsoft Gift Card for Commenting on This Post

A while back, I wrote a post about the Windows XP End of Support date (yes, it has already passed). Microsoft has generously donated a Microsoft Store Gift Card to help get the message out that it's time to upgrade.

Leave a comment below and let me know what you think is the single most important reason to upgrade from Windows XP, or your favourite new feature of Windows 8.1, and you will be entered into a random draw for a $100 Microsoft Store Gift Card. Winners will be selected on July 15th.

Wednesday, April 09, 2014

Add Wikipedia App to Microsoft Word 2013

Office 2013 and SharePoint apps are great, but I have to admit that I haven’t done much with them yet. Today, I found myself wondering what I’d have to do to get Wikipedia to open when I right-clicked on a word or phrase in Word 2013.

image

To add Wikipedia, all I had to do was choose Insert from the Ribbon menu at the top of the screen and then click the Wikipedia icon. If you click “My Apps,” you can navigate to the Office 2013 app store and check out what other interesting apps are available.

image

Once you click the icon, you’ll need to confirm that you trust this new app.

image

After you’ve trusted the app, you can now quickly look up words or phrases from your Word documents in Wikipedia. Very cool!

image

Switching the app that opens by default when you choose the “Define” option is not obvious. To choose a different app, you first need to hide the Wikipedia app by clicking “My Apps” in the Ribbon and then clicking “Manage My Apps.”

image

For example, let’s say you wanted to use the Bing dictionary for the Define option. You can add the Bing app, then choose Manage My Apps—which opens a web page—and hide Wikipedia from that page. Once you hide an app, you should be able to click the “Refresh” button in the My Apps view and see that the app is no longer there.

After you’re done, you can unhide Wikipedia. Note that some people have reported that they had to restart Word after hiding Wikipedia.

I should mention that it’s also possible to have multiple apps running. In the screenshot below, I have both the Bing dictionary and the Wikipedia app open.

image

Saturday, March 22, 2014

U.N. World Water Day 2014

Today is U.N. World Water day!

My smartass Twitter comment was that I’ll spend the day reflecting on things, but seriously… water is our most precious resource and we really need to start taking care of it. Since joining Aquatic Informatics software, I’ve had much more exposure to stories of water-related issues around the world. We’re working to provide the software the world needs to make smart water policies based on actual data rather than speculation or lobbying. We need to let science decide how we can provide better access to clean drinking water and more widespread use of renewable resources for power generation.

From the U.N. Water Day Facebook page:

“Did YOU know that… by 2035, the global energy demand is projected to grow by more than one-third and demand for electricity is expected to grow by 70%.

22nd of March is World Water Day. The theme for this year is Water & Energy.
Share this picture and help us raise awareness about UN-Water World Water Day.”

UNWorldWaterDay

Wednesday, February 26, 2014

Windows XP End of Life is Coming

It's time to move on. Although Windows XP was an important release when it came out, and a surprising number of people and organizations are still running it, it's time to upgrade.



If you know someone still running XP, you should remind him or her in no uncertain terms that 2001 is over. On April 8th, 2014, Windows XP support will end. After that date, there will not be any updates--most notably security updates--for the O/S that should probably be called Windows Classic at this point.

One of my responsibilities at Aquatic Informatics is the supported systems matrix for our software. We had deprecated support for Windows XP a couple of releases ago, but as of our next release, we'll officially be dropping support. There's no doubt that most software companies have already done the same or will be dropping XP support in the near future. So boldly step into the present and take a look at Windows 8.1.

Update: Microsoft has generously donated a Microsoft Store Gift Card to help get the message out that it's time to upgrade.

Leave a comment on my update post and let me know what you think is the single most important reason to upgrade from Windows XP, or your favourite new feature of Windows 8.1, and you will be entered into a random draw for a $100 Microsoft Store Gift Card. Winners will be selected on July 15th.

Thursday, January 30, 2014

The Mystery of Why People Use SharePoint

I had to laugh when I read this blog post from the Calgary Herald: Is SharePoint a pain point? Maybe it’s time to ditch it. They used the image below (slightly modified here) to distance themselves from the "righteous" supporters of SharePoint.

image

Of course SharePoint isn't perfect, but if you require some features that SharePoint provides, then you should run a pilot and try it out; base your decision on your actual use cases. This quote from a white paper that I co-wrote with Colin Spence (he wrote this particular text) sums it up nicely:

“If end users already know how to use SharePoint and have received training on the key tools provided by SharePoint the organization can be more ambitious in the implementation. On the other hand if end users are completely unfamiliar with SharePoint and in general not open to change and not willing to take training, IT should carefully control the complexity of the SharePoint configuration. Projects can be seen as “failures” if a few important end users complain that the SharePoint solution is ‘too complicated,’ or ‘too time consuming.’”

It's a classic case of people believing that there is a perfect system out there and all they need to do is pick the right one. It's simply not reality.

Wednesday, December 18, 2013

Installing Ruby on Rails in Ubuntu 12.04 on Azure

In previous posts, I covered installing Ubuntu on Windows Azure, remotely accessing Linux from Windows Azure, installing LAMP and installing Git on Linux and a few other topics. The next subject I’ll be writing about is the Ruby on Rails web development platform.

There are some great resources out there for learning Ruby and Rails, and when the install goes smoothly, it’s very easy. I’m not an expert on Ruby, but the install didn’t go swimmingly for me, so I’m writing this post.

Step 1 – Install RVM and Ruby

\curl -L https://get.rvm.io | bash -s stable --ruby

Ruby Version Manager (RVM) is a great tool for working with Ruby. You can even run multiple versions of Ruby and easily switch back and forth.

image

Step 2 – Install Rails

Finally, you can use RubyGems to install rails:

gem install rails

(This may need sudo; I had to run it both with and without, see the note below.)

Note: If you see the error "ERROR: Error installing rails: activesupport requires Ruby version >= 1.9.3." (or some other version), install Ruby 1.9.3 (yes, even if you have a newer version installed) and then use RVM to set the old version as the default: rvm use ruby-1.9.3. This caused me much angst today. You can use ruby -v to see which version you’re running and rvm list to see the installed versions.

I also ran into this ugly error:

cawood@cawood:~$ rails --version
/usr/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find railties (>= 0) amongst [activesupport-4.0.2, atomic-1.1.14, bigdecimal-1.1.0, bundler-1.3.5, bundler-unload-1.0.2, executable-hooks-1.2.6, gem-wrappers-0.9.2, i18n-0.6.9, io-console-0.3, json-1.5.5, minitest-4.7.5, minitest-2.5.1, multi_json-1.8.2, rake-0.9.2.2, rdoc-3.9.5, rubygems-bundler-1.4.2, rvm-1.11.3.8, thread_safe-0.1.3, tzinfo-0.3.38] (Gem::LoadError)
        from /usr/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
        from /usr/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
        from /usr/local/bin/rails:18:in `<main>'

I had to run gem install rails (with no sudo) to get all the gems to install properly. After I did that, I could see that Rails was installed properly:

cawood@cawood:~$ rails --version
Rails 4.0.2

Of course, my list of installed gems was much longer because the missing gems had been installed. It should look like this:

cawood@cawood:~$ gem list

*** LOCAL GEMS ***

actionmailer (4.0.2)
actionpack (4.0.2)
activemodel (4.0.2)
activerecord (4.0.2)
activerecord-deprecated_finders (1.0.3)
activesupport (4.0.2)
arel (4.0.1)
atomic (1.1.14)
bigdecimal (1.1.0)
builder (3.1.4)
bundler (1.3.5)
bundler-unload (1.0.2)
erubis (2.7.0)
executable-hooks (1.2.6)
gem-wrappers (0.9.2)
hike (1.2.3)
i18n (0.6.9)
io-console (0.3)
json (1.5.5)
mail (2.5.4)
mime-types (1.25.1)
minitest (4.7.5, 2.5.1)
multi_json (1.8.2)
polyglot (0.3.3)
rack (1.5.2)
rack-test (0.6.2)
rails (4.0.2)
railties (4.0.2)
rake (0.9.2.2)
rdoc (3.9.5)
rubygems-bundler (1.4.2)
rvm (1.11.3.8)
sprockets (2.10.1)
sprockets-rails (2.0.1)
thor (0.18.1)
thread_safe (0.1.3)
tilt (1.4.1)
treetop (1.4.15)
tzinfo (0.3.38)

Alternate Method—This didn’t work for me on Ubuntu 12.04

Step 1 – Install Ruby

You can either install the standard version:

sudo apt-get install ruby-full build-essential

or the minimal requirements (note the version number; you’ll have to update it):

sudo aptitude install ruby build-essential libopenssl-ruby ruby1.8-dev

I’m going with the first option to install ruby-full.

image

Step 2 – Install the Ruby Version Manager

CAUTION: Normally you would just run: sudo apt-get install ruby-rvm

However, there is an issue with the Ubuntu RVM package, so you should run this instead:

\curl -L https://get.rvm.io | bash -s stable --ruby --autolibs=enable --auto-dotfiles

If you do run into issues with RVM (for example, rvm use doesn’t switch to the version you want), you can run these commands to clean your system (see this thread):

sudo apt-get --purge remove ruby-rvm
sudo rm -rf /usr/share/ruby-rvm /etc/rvmrc /etc/profile.d/rvm.sh

Then, open a new terminal and validate that the environment is clean of old RVM settings (there should be no output):

env | grep rvm
Step 3 – Check that You’re Using the Latest Version of Ruby

ruby -v will show you which version you’re using.

If you’re not using the one you want, you can use RVM to upgrade. This will download the source that RVM then uses to compile and install Ruby; it is not a quick command.

sudo rvm install ruby-1.9.3-p125
or

sudo rvm install ruby-1.9.3 (note the version number)
However, if you choose the second option, you may need to work around an issue first. If you try to install ruby-1.9.3, you might get this error from RVM: “ERROR: The requested url does not exist: 'ftp://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-.tar.bz2'”. You can work around it by downloading the package yourself:

sudo curl -o /usr/share/ruby-rvm/archives/ruby-1.9.3-.tar.bz2 http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p0.tar.bz2
Check that you’re using the latest Ruby: ruby -v

If not, switch to the latest using RVM: rvm use 1.9.3
Step 4 – Install Rails

Finally, you can use RubyGems to install rails:

sudo gem install rails
Step 5 – Install a Web Server

I use Apache and MySQL, so the LAMP install works for me, but there are other options such as WEBrick or Lighttpd.
Other posts on this topic:

RubyOnRails.org getting started

Ubuntu documentation – Ruby on Rails

Thursday, December 05, 2013

Using Wyzz Web-based HTML editing control in ASP.NET MVC

I recently had to put out a web-based single page application (SPA) on short notice. To make that happen, I knew I had to use some open-source controls. One was the jsTree treeview control (which I wrote about on this blog - Using jsTree with ASP.NET MVC) and another was the Wyzz WYSIWYG web-based editing control for HTML.
From their site: “Wyzz is an ultra-small, very light WYSIWYG (What You See Is What You Get) Editor for use in your web applications. It's written in JavaScript, and is free (as in speech and as in beer) for you to use in your web applications and/or alter to your needs (see the license conditions).”

image

Naturally, the first step is to add a reference to the wyzz.js script file. Once you have that, you just need to add the control to an HTML <textarea> element. Finally, it’s a simple matter of adding some JavaScript to “make_wyzz” the control.
<script language="JavaScript" type="text/javascript" src="~/Home/wyzz.js"></script>



<textarea name="textEditor" id="textEditor" rows="10" cols="40">No file loaded...</textarea><br />
<script language="javascript1.2">
    make_wyzz('textEditor');
</script>

<div ng-controller="EditorCtrl">
    <form novalidate class="simple-form">
        <button ng-click="saveFileContent()">save</button>
    </form>
</div>

As you can see in the example above, I’ve chosen to use an AngularJS controller to define the behaviour of the save button. In the JavaScript, I call a server-side controller action (ASP.NET in this case) and send it the content of the control by accessing the HTML element that the control is using.

// The EditorCtrl referenced by ng-controller above; the module name
// "editorApp" is illustrative -- wire it up with ng-app in your page.
angular.module('editorApp', [])
    .controller('EditorCtrl', function ($scope, $http) {
        $scope.saveFileContent = function () {
            // Wyzz replaces the textarea with an iframe whose id is
            // "wysiwyg" + the textarea name; read the edited HTML from there.
            $http.post('/Home/SaveFileContent', {
                filePath: document.getElementById("multilingualfile").innerHTML,
                content: document.getElementById("wysiwyg" + "textEditor").contentWindow.document.body.innerHTML,
                title: document.getElementById("titleHtml").value
            }).then(
                function (response) {
                    alert("File Save Result: " + response.data.Result);
                },
                function (data) {
                    alert("Error saving file content");
                }
            );
        };
    });

Update: Here’s the basic format of the server-side part:


[HttpPost]
public ActionResult SaveFileContent(string filePath, string content, string title)
{
    try
    {
        ...
        
        return Json
            (
                new
                {
                    Result = "Success",
                }
            );
    }
    catch (Exception ex)
    {
       ...

        return Json
            (
                new
                {
                    Result = "Error saving content: " + ex.ToString(),
                }
            );
    }
}
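One detail worth calling out: ASP.NET MVC binds the posted JSON to the action’s parameters by name, so the keys of the object you post must match the C# parameter names exactly. Here’s a minimal sketch of the payload shape; the values are hypothetical examples, not paths from my project:

```javascript
// The keys must match SaveFileContent(string filePath, string content,
// string title) exactly, or model binding will leave the parameters null.
// The values below are illustrative only.
const payload = {
    filePath: "/content/about.htm",
    content: "<p>edited html</p>",
    title: "About"
};

// The names that model binding will try to match on the server:
const boundNames = Object.keys(payload).sort().join(",");
```

If a save ever arrives with null parameters, a key/parameter name mismatch is the first thing to check.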

To customize your Wyzz controls, you can edit the wyzz.js file. If you have any issues, refer to the Wyzz discussion forum.

Sunday, December 01, 2013

Using jsTree with ASP.NET MVC

When I wanted to use a pure JavaScript treeview control for a recent ASP.NET MVC5 project, I looked around and found jsTree; it’s a popular and rich solution, so I decided to try it. I ran into a few customization hurdles, so here are my lessons learned.

Note that this is for jsTree 1.0; at the time of writing, 3.0 has not been released.

Step 1: The HTML in the view. Pretty simple…

<div id="FileTree"></div>


Step 2: Loading the tree dynamically from the MVC controller using jQuery.

<script type="text/javascript">
// Begin JSTree: courtesy Ivan Bozhanov: http://www.jstree.com:

$('#FileTree').jstree({
    "json_data": {
        "ajax": {
            "url": "/Home/GetTreeData",
            "type": "POST",
            "dataType": "json",
            "contentType": "application/json; charset=utf-8"
        }
    },
    "themes": {
        "theme": "default",
        "dots": false,
        "icons": true,
        "url": "/jstree/themes/default/style.css"
    },
    "contextmenu": {
        "items": {
            "create": false,
            "rename": false,
            "remove": false,
            "ccp": false
        }
    },
    "plugins": ["themes", "json_data", "dnd", "contextmenu", "ui", "crrm"]
});

</script>


Step 3: Server-side code to populate the tree. This code is based on desalbres’s Simple FileManager with jsTree. (The model code is below.)

// Begin JSTree (Controller code courtesy desalbres: http://www.codeproject.com/Articles/176166/Simple-FileManager-width-MVC-3-and-jsTree)
[HttpPost]
public ActionResult GetTreeData()
{
    if (AlreadyPopulated == false)
    {
        JsTreeModel rootNode = new JsTreeModel();
        rootNode.attr = new JsTreeAttribute();
        rootNode.data = "Root";
        string rootPath = Request.MapPath(dataPath);
        rootNode.attr.id = rootPath;
        PopulateTree(rootPath, rootNode);
        AlreadyPopulated = true;
        return Json(rootNode);
    }
    else
    {
        return null;
    }
}

/// <summary>
/// Populate a TreeView with directories, subdirectories, and files
/// </summary>
/// <param name="dir">The path of the directory</param>
/// <param name="node">The "master" node, to populate</param>
public void PopulateTree(string dir, JsTreeModel node)
{
    if (node.children == null)
    {
        node.children = new List<JsTreeModel>();
    }

    // get the information of the directory
    DirectoryInfo directory = new DirectoryInfo(dir);

    // loop through each subdirectory
    foreach (DirectoryInfo d in directory.GetDirectories())
    {
        // create a new node
        JsTreeModel t = new JsTreeModel();
        t.attr = new JsTreeAttribute();
        t.attr.id = d.FullName;
        t.data = d.Name.ToString();
        // populate the new node recursively
        PopulateTree(d.FullName, t);
        node.children.Add(t); // add the node to the "master" node
    }

    // loop through each file in the directory, and add these as nodes
    foreach (FileInfo f in directory.GetFiles("*.htm"))
    {
        // create a new node
        JsTreeModel t = new JsTreeModel();
        t.attr = new JsTreeAttribute();
        t.attr.id = f.FullName;
        t.data = f.Name.ToString();
        // add it to the "master"
        node.children.Add(t);
    }
}

// Don't load the jsTree treeview again if it has already been populated.
// Note: this causes a bug where the tree won't repaint on browser refresh
public bool AlreadyPopulated
{
    get
    {
        return (Session["AlreadyPopulated"] == null ? false : (bool)Session["AlreadyPopulated"]);
    }
    set
    {
        Session["AlreadyPopulated"] = (bool)value;
    }
}
// End JSTree

First I had to resolve the issue that the treeview wouldn’t repaint on a browser refresh. It’s possible that I simply missed this when I cherry-picked code from the FileManager CodeProject example. Resetting the session flag in the controller action that serves the view fixed it:


public ActionResult Test(string returnUrl)
{
    ViewBag.ReturnUrl = returnUrl;
    Session["AlreadyPopulated"] = false;
    return View();
}

Next, I had to customize jsTree to behave the way I wanted. Getting the tree to start collapsed (closed) instead of expanded (open) was the first order of business. The jsTree API took care of that:


$('#FileTree').bind("loaded.jstree", function (event, data) {
    $(this).jstree("close_all");
});


Next, I wanted the leaf nodes to use a different background image than the folder nodes. This required changing the server-side code to actually write the leaves (files) as leaf nodes and then add the right CSS to style the jstree-leaf class.



namespace FileEditor.Models
{
    public class JsTreeModel
    {
        public string data;
        public JsTreeAttribute attr;
        // this was "open" but changing it to "leaf" adds "jstree-leaf" to the class
        public string state = "leaf";
        public List<JsTreeModel> children;
    }

    public class JsTreeAttribute
    {
        public string id;
    }
}


Then I styled the leaf nodes with a different background image than the folders:



<style type="text/css">
    #FileTree .jstree-leaf > a > ins {
        background: url("/jstree/themes/default/d.gif");
        background-position: -2px -19px !important;
    }
</style>


Finally, I wanted to disable the right-click context menu options since I’m not using them. (This code appears in the code above.)

"contextmenu": {
    "items": {
        "create": false,
        "rename": false,
        "remove": false,
        "ccp": false
    }
},

That’s it. jsTree is now working the way I want. I expect that version 3 will be great when it is released.

Other posts on this topic:
jsTree – Few examples with ASP.Net/C#
Simple FileManager width MVC 3 and jsTree

Wednesday, November 27, 2013

Continuous Deployment to Azure Web Sites via Visual Studio 2013 and VS Online

In principle, I appreciate the value of test-driven development (TDD) and continuous integration (CI) builds; I just didn’t think they would be entirely practical for my solo side projects. Well, I don’t have any excuses anymore. I’ve been working on a quick project, and in just a few hours I had an AngularJS single-page application (SPA) remotely building and deploying to an Azure web site, all for free. Every time I check in a change, the source is copied to the cloud, a CI build is triggered remotely, my tests run, and the app is deployed automatically to the host server. Pretty slick.


One pleasant change is that testing web applications has come a long way since the dark days of having to wrangle with the HTTP context. Maybe the situation isn’t totally resolved, but it’s easier these days to write tests because the HTTP context isn’t as much of a stumbling block in newer frameworks.

I only ran into two issues setting everything up, and one was pretty much my fault. First, I created an ASP.NET MVC project using Visual Studio 2013 and added it to Visual Studio Online (which supports both Git and Team Foundation Services [TFS] online). As I wrote in a previous post, you can get free private Git repositories via Visual Studio Online. I also created an Azure web site and linked it to the project. I created a CI build profile, and the build triggered correctly on check-in. No problems, no errors. However, the project was not being deployed. The issue was simple: I had chosen the wrong deployment template. I should have been using the AzureContinuousDeployment.11.xaml template.

The other issue was that the Visual Studio Online build machine wasn’t in sync with the NuGet updates I had installed on my local box. To resolve the resulting missing-DLL errors, I enabled the NuGet Package Restore feature and then checked in the entire “packages” directory.
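For reference, the “Enable NuGet Package Restore” command of that era (NuGet 2.x with Visual Studio 2013) wired restore into the build by editing the project file. This is a from-memory sketch of roughly what it added to the .csproj, not a copy from my actual project:

```xml
<!-- Added by "Enable NuGet Package Restore" (NuGet 2.x era, sketch) -->
<PropertyGroup>
  <RestorePackages>true</RestorePackages>
</PropertyGroup>
<Import Project="$(SolutionDir)\.nuget\NuGet.targets"
        Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />
```

One wrinkle if you go the check-everything-in route instead: the default Visual Studio .gitignore excludes the packages folder, so you may need to force-add it (e.g. `git add -f packages`).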

To get this working for your project, you can start by reading the MSDN article about a continuous deployment setup.

BTW, if you don’t check in the package files, you’ll get errors such as these (and more) because the remote build machine won’t be able to find the DLLs. To help with search, I'll paste in a bunch of the errors here.

Models\AccountModels.cs (11): The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?)
App_Start\WebApiConfig.cs (5): The type or namespace name 'Newtonsoft' could not be found
Controllers\AccountController.cs (7): The type or namespace name 'DotNetOpenAuth' could not be found
Controllers\TodoController.cs (3): The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity'
Controllers\TodoListController.cs (4): The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity'
Filters\InitializeSimpleMembershipAttribute.cs (3): The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity'
Areas\HelpPage\SampleGeneration\HelpPageSampleGenerator.cs (14): The type or namespace name 'Newtonsoft' could not be found (are you missing a using directive or an assembly reference?)
Models\TodoListDto.cs (6): The type or namespace name 'Newtonsoft' could not be found
Models\TodoList.cs (6): The type or namespace name 'Newtonsoft' could not be found
Models\TodoItemContext.cs (16): The type or namespace name 'DbContext' could not be found
Models\AccountModels.cs (18): The type or namespace name 'DbSet' could not be found
Models\TodoItemContext.cs (23): The type or namespace name 'DbSet' could not be found
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "EntityFramework". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.OAuth". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.OAuth.Consumer".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.OpenId".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.OpenId.RelyingParty". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.Core".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "EntityFramework.SqlServer".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "Newtonsoft.Json".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "System.Web.Http.OData".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Found conflicts between different versions of the same dependent assembly.

Saturday, November 09, 2013

Messing With a Thief When My Blog Content was Stolen

While I was working on my last book, How to Do Everything: SharePoint 2013, the main McGraw-Hill editor on the project sent me a nicely worded email that basically asked if I was plagiarizing content for the book.

I was shocked to say the least, but when I read through the rest of the email, my shock quickly turned to burning rage. The reason I was being asked this question was because some moron was copying other people’s blog posts to his blog and passing them off as his own. I had used some of my own content in the draft of the book and the editor wasn’t sure who wrote it.

I sent the scoundrel an email giving him 48 hours to remove the post before I took any action. Of course, he probably didn’t realize that I could actually do anything if he chose to ignore me. I also contacted some other people to let them know that their content was being stolen by the same guy.

The funny part was that the thief was so lazy that he didn’t even bother copying the images, he just copied the source and therefore I had control over the images on his blog. A friend of mine had encountered the same thing and decided to mess with the culprit, so I figured I’d do the same. The first day after the warning, I gave him a chance to take down the post with no really embarrassing images on his blog. Remember that these are screenshots of his blog. I could do whatever I wanted with the images since they were on my server.

 BlogTheftDay1_edit
day 1 – tame and to the point

On day 2, I asked my friend for permission to use the image he posted when his content was stolen.

 BlogTheftDay2_edit
day 2 – you should have listened on day 1

After that, I went for a random theme.

 BlogTheftDay3_edit
day 3 – now his blog was actually worth reading

 BlogTheftDay4and5edit
day 4 and 5 – just some random stuff

Eventually, he took down the post. He never apologized or replied to my message, but there's a lesson here for lazy thieves.

Wednesday, October 30, 2013

I Miss Compiled Server-side Code

Clearly, we should accept the things we cannot change, but sometimes it's fun to rant for the sake of ranting. This week I was working on a web application using AngularJS/AJAX and it reminded me that I really miss relying on server-side code. I know it's not in vogue, but I'll say it... I liked web development before JavaScript took over as such a heavy part of the code base. Writing this post, I do feel a bit like an old man talking about the "good 'ol days," but I also feel strongly that there must be developers out there suffering under the new fashion of client-side code. (And don't even get me started on running 'scripting' languages server-side. :)

I get the 'why'--I really do. I understand that the IT world is moving to the cloud and web applications heavily lean on script-based client-side frameworks to allow for a powerful user experience across platforms including mobile devices such as phones and tablets. (I also get that the V8 JavaScript VM used in Chrome doesn't include an interpreter--that's just splitting hairs.) Yes, I understand all that and it's great.

The problem is the dev experience. I've been using Sublime and I tried out the web tools in Visual Studio 2013, and they have made some progress to better support JavaScript and client-side frameworks, but the experience is just not as good as the good ol' days of writing compiled server controls. Client-side code is more fragile because there's no compiler to help you find all sorts of errors. Writing "myvariable" instead of "myVariable" can break your whole application, and you likely won't know it until you try to run that piece of code. When debugging issues, there are cases where it's necessary to pull out a tool such as Fiddler, which allows you to manually inspect the communication between the client and server. I'm not knocking Fiddler, it's a fantastic asset, but seriously, manually reading JSON to figure out why it's malformed? (Not that I would ever do that.) We may as well be back in the 90s writing VBScript.

To be clear, I'm not some sort of code snob. Code is code, and devs should be judged on code quality, not the language they happen to be using. So I'm not saying that Java, C#, or Go is in some way better than JavaScript; that's not my point at all. I'm simply saying that my experience writing code was more enjoyable when I could rely on some well-refined tools to improve my productivity.

What's the solution to all this? We clearly need dev tools good enough that the difference between server-side and client-side code is irrelevant. In my mind, the various JavaScript frameworks have improved the situation, but there's lots of room for improvement. Client-side code (or server-side scripts such as Node.js) will be around for a long time and it should be treated as first class.