Tuesday, September 30, 2014

Leaving the Microsoft MVP Program

In a few days, my time as a Microsoft Most Valuable Professional (MVP) will come to a close. Sad, yes I know… OK, not really so much sad as the natural evolution of things. I first received the MVP award a mere four days before my daughter was born. Even back then, I knew that I would have to eventually bow out of the program. Of course, there are people in the program who successfully juggle their job, home life, and somehow still manage to be excellent MVPs, but I don’t feel I’m one of them, so I’ve asked not to be renewed this year.

The MVP program is excellent and I’ve enjoyed being a part of it. Most of all, I’ll miss going to the MVP Summit and hanging out with the other MVPs. There are other benefits of course—access to early information, free software—but those don’t match the amount of effort it takes to get the award. As I describe in the post mentioned below, you really have to be doing community work because you love doing community work—otherwise the time investment just doesn’t make sense.

I already sent the obligatory “so long and thanks for all the fish” message to my fellow MVPs. I wish you all well and keep up the excellent work!

Microsoft MVP Banner geeklit cawood

BTW -- If you don’t know what the Microsoft MVP award is, here’s some content from a post I wrote called How do I Become a Microsoft MVP?

First of all, if there is only one thing to remember about the MVP award, it’s this: the Microsoft Most Valuable Professional (MVP) award isn’t a certification. There are no set criteria or steps that someone can take to become an MVP. As the name implies, it’s an award.

“This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in SharePoint Services technical communities during the past year.”
- Microsoft

In practical terms, this means helping with community-focused resources such as contributing to newsgroups, speaking at conferences, writing/blogging about your subject (e.g., SharePoint) and contributing code to CodePlex (an open source site).

Microsoft MVP Logo geeklit cawood

Wednesday, August 27, 2014

My ALS Ice Bucket Challenge Video

Here is my ALS Ice Bucket Challenge video. I was challenged by my good friend and advocate for ALS fundraising efforts, Rasool Rayani.

You can contribute online to the Walk for ALS (ALS society of BC).

"Also known as Lou Gehrig's disease, amyotrophic lateral sclerosis is a progressive neurodegenerative disease that affects nerve cells in the brain and the spinal cord. There is no cure and only one medicine to slow its progress has been approved."
http://www.cbc.ca/news/business/ice-bucket-challenge-brings-millions-of-dollars-to-battle-als-1.2739663

Thursday, July 10, 2014

MVPs and the Evolution of Windows

I’ve been mentioned on the Microsoft MVP Award Program blog. The post is about MVPs and the evolution of Windows.

From the article:

For more than two decades, MVPs have been on the forefront of helping people around the world make the most of their Microsoft technologies, including its centerpiece, the Windows operating system, from Windows 3.1 to Windows 8.1 and all the releases in between. Today, with Microsoft’s new rapid release cadence, their expertise is more important than ever…

In Canada, five MVPs produced guidance for the Windows XP End of Life campaign. Here are some highlights:

Brian Bourne | Windows XP End Of Support – Mitigating the Security Concerns
Yannick Plavonil | Chères entreprises, réveillez-vous Windows XP c’est fini dans un mois!
Sean Wallbridge | I Love My Surface Pro 2 and Windows 8.1
Colin Smith | XP End of Support – Do You Need New Hardware?
Stephen Cawood | Windows XP End of Life Is Coming

Turn off Image or Video Prompt in Snagit 12

If you haven’t tried Snagit, you should check it out. It’s much better than relying on the old school screen capture mechanisms. When I was taking screenshots for my books, Snagit saved me a great deal of time. As just one example, try capturing an open dropdown menu using just Windows screen capture shortcuts—it’s like trying to catch a greased pig.

I just upgraded to Snagit 12 and found that every time I take an image capture, I’m prompted to select either a video or image capture (see the big blue buttons in the screenshot below). I rarely use the video capture, so this is a waste of time. Here’s how to turn it off.

image

First, check if you’ve got an old version of Snagit installed. I had 11 and 12 installed and was not getting the behaviour I wanted. I had to uninstall Snagit 11 to get rid of the old editor.

Next, go to the preferences window and check the hotkey associated with the default capture profile. I like to use Print Screen, so I’m going to change the “Global capture” shortcut key to something else. To get to the preferences window, you can right-click the Snagit icon in the notification area (bottom-right corner of your screen, with the hidden icons), then choose Preferences and the Hotkeys tab.

image

After I’ve freed up the right shortcut key, I’m going to associate that key (PrtScn) with a profile that doesn’t ask if I want to take video.

To do this, choose the “Send to Clipboard” profile from the Manage Profiles dialog, and then click the Hotkey button at the bottom to associate this with the keyboard shortcut you’d like to use. Note that I wanted to go directly to the Snagit editor, so I have that enabled.

image

Now I’m back to the lightning fast capture that I’ve come to love. When I use the hotkey, I go straight to the Snagit editor. If I wanted, I could add styles (such as borders) as part of the profile and they would be automatically applied.

image

p.s., If you’re wondering how I captured the Snagit windows: I cheated. I used Alt+PrtScn to do it old school.

Wednesday, June 18, 2014

Aquatic Informatics Blog Post: AQUARIUS: Where Smart Water Data Meets the SMART Tunnel

Check out my first blog post on the Aquatic Informatics blog: AQUARIUS: Where Smart Water Data Meets the SMART Tunnel

In this post, I discuss the engineering feat that is the SMART (Stormwater Management and Road Tunnel) Tunnel. It stretches 9.7 km beneath Malaysia’s capital and functions both as a roadway and as a storm drainage system.

SMART Tunnel Aquatic Informatics Blog by Stephen Cawood

- Image Courtesy: ENTURA, Hydro Tasmania

From the post:

The “Stormwater Management And Road Tunnel” (SMART Tunnel), has been featured in many news stories and television shows such as Extreme Engineering on the Discovery Channel, Build It Bigger on the Science Channel, Megastructures on the National Geographic Channel, and Man Made Marvels on the Science Channel. The reason the Kuala Lumpur tunnel is so amazing starts with its length: it is the second longest storm drainage tunnel in Asia at 9.7 km (6.0 mi). The truly amazing aspect of the SMART tunnel is that it’s the first dual-use storm drainage and road structure in the world. The tunnel is mainly used to ease traffic congestion in Kuala Lumpur, but there is a storm drainage system under the two decks of cars. Essentially, the tunnel acts as a reservoir to decrease the peak of water flows of the Klang river and therefore mitigate the severity of floods. However, that’s not all. In the event of a bad flood, the road decks can also be used to drain the storm water away from the city. That’s right. The car levels can be completely filled with water as well!

Update: It was recently announced that a similar tunnel project is planned for Jakarta:

Jakarta Announces Plan for Integrated Tunnels to Manage Traffic and Floods

Thursday, May 29, 2014

Windows XP End of Life Update: Win a Microsoft Gift Card for Commenting on This Post

A while back, I wrote a post about the Windows XP End of Support date (yes, it has already passed). Microsoft has generously donated a Microsoft Store Gift Card to help get the message out that it's time to upgrade.

Leave a comment below and let me know what you think is the single most important reason to upgrade from Windows XP, or your favourite new feature of Windows 8.1, and you will be entered into a random draw for a $100 Microsoft Store Gift Card. Winners will be selected on July 15th.

Wednesday, April 09, 2014

Add Wikipedia App to Microsoft Word 2013

Office 2013 and SharePoint apps are great, but I have to admit that I haven’t done much with them yet. Today, I found myself wondering what I’d have to do to get Wikipedia to open when I right-clicked on a word or phrase in Word 2013.

image

To add Wikipedia, all I had to do was choose Insert from the Ribbon menu at the top of the screen and then click the Wikipedia icon. If you click “My Apps,” you can navigate to the Office 2013 app store and check out what other interesting apps are available.

image

Once you click the icon, you’ll need to confirm that you trust this new app.

image

After you’ve trusted the app, you can quickly look up words or phrases from your Word documents in Wikipedia. Very cool!

image

Switching the app that opens by default when you choose the “Define” option is not obvious. To choose a different app, you first need to hide the Wikipedia app by clicking “My Apps” in the Ribbon and then clicking “Manage My Apps.”

image

For example, let’s say you wanted to use the Bing dictionary for the Define option. You can add the Bing app, then choose Manage My Apps—which opens a web page—and hide Wikipedia from that page. Once you hide an app, you should be able to click the “Refresh” button in the My Apps view and see that the app is no longer there.

After you’re done, you can unhide Wikipedia. Note that some people have reported that they had to restart Word after hiding Wikipedia.

I should mention that it’s also possible to have multiple apps running. In the screenshot below, I have both the Bing dictionary and the Wikipedia app open.

image

Saturday, March 22, 2014

U.N. World Water Day 2014

Today is U.N. World Water Day!

My smartass Twitter comment was that I’ll spend the day reflecting on things, but seriously… water is our most precious resource and we really need to start taking care of it. Since joining Aquatic Informatics, I’ve had much more exposure to stories of water-related issues around the world. We’re working to provide the software the world needs to make smart water policies based on actual data rather than speculation or lobbying. We need to let science decide how we can provide better access to clean drinking water and more widespread use of renewable resources for power generation.

From the U.N. Water Day Facebook page:

“Did YOU know that… by 2035, the global energy demand is projected to grow by more than one-third and demand for electricity is expected to grow by 70%.

22nd of March is World Water Day. The theme for this year is Water & Energy.
Share this picture and help us raise awareness about UN-Water World Water Day.”

UNWorldWaterDay

Wednesday, February 26, 2014

Windows XP End of Life is Coming

It's time to move on. Although Windows XP was an important release when it came out, and a surprising number of people and organizations are still running it, it's time to upgrade.



If you know someone still running XP, you should remind him or her in no uncertain terms that 2001 is over. On April 8th, 2014, Windows XP support will end. After that date, there will not be any updates--most notably security updates--for the O/S that should probably be called Windows Classic at this point.

One of my responsibilities at Aquatic Informatics is the supported systems matrix for our software. We had deprecated support for Windows XP a couple of releases ago, but as of our next release, we'll officially be dropping support. There's no doubt that most software companies have already done the same or will be dropping XP support in the near future. So boldly step into the present and take a look at Windows 8.1.

Update: Microsoft has generously donated a Microsoft Store Gift Card to help get the message out that it's time to upgrade.

Leave a comment on my update post and let me know what you think is the single most important reason to upgrade from Windows XP, or your favourite new feature of Windows 8.1, and you will be entered into a random draw for a $100 Microsoft Store Gift Card. Winners will be selected on July 15th.

Thursday, January 30, 2014

The Mystery of Why People Use SharePoint

I had to laugh when I read this blog post from the Calgary Herald: Is SharePoint a pain point? Maybe it’s time to ditch it. They used the image below (slightly modified here) to distance themselves from the "righteous" supporters of SharePoint.

image

Of course SharePoint isn't perfect, but if you require some features that SharePoint provides, then you should run a pilot and try it out; base your decision on your actual use cases. This quote from a white paper that I co-wrote with Colin Spence (he wrote this particular text) sums it up nicely:

“If end users already know how to use SharePoint and have received training on the key tools provided by SharePoint the organization can be more ambitious in the implementation. On the other hand if end users are completely unfamiliar with SharePoint and in general not open to change and not willing to take training, IT should carefully control the complexity of the SharePoint configuration. Projects can be seen as “failures” if a few important end users complain that the SharePoint solution is ‘too complicated,’ or ‘too time consuming.’”

It's a classic case of people believing that there is a perfect system out there and all they need to do is pick the right one. It's simply not reality.

Wednesday, December 18, 2013

Installing Ruby on Rails in Ubuntu 12.04 on Azure

In previous posts, I covered installing Ubuntu on Windows Azure, remotely accessing Linux from Windows Azure, installing LAMP, installing Git on Linux, and a few other topics. The next subject I’ll be writing about is the Ruby on Rails web development platform.

There are some great resources out there for learning Ruby and Rails, and when the install goes smoothly, it’s very easy. I’m not an expert on Ruby, but the install didn’t go swimmingly for me, so I’m writing this post.

Step 1 – Install RVM and Ruby

\curl -L https://get.rvm.io | bash -s stable --ruby

Ruby Version Manager (RVM) is a great tool for working with Ruby. You can even run multiple versions of Ruby and easily switch back and forth.

image

Step 2 – Install Rails

Finally, you can use RubyGems to install rails:

gem install rails (this may need sudo; I had to run it both with and without, see the note below)

Note: if you see the error “ERROR: Error installing rails: activesupport requires Ruby version >= 1.9.3” (or some other version), install Ruby 1.9.3 (yes, even if you have a newer version installed) and then use RVM to set the old version as the default with: rvm use ruby-1.9.3. This caused me much angst today. You can use ruby -v to see which version you’re running and rvm list to see the installed versions.
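
Putting that together, the recovery sequence looks something like this (a minimal sketch built from the commands above; the version numbers may differ on your system):

$ rvm install ruby-1.9.3
$ rvm use ruby-1.9.3
$ ruby -v
$ rvm list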

I also ran into this ugly error:

cawood@cawood:~$ rails --version
/usr/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find railties (>= 0) amongst [activesupport-4.0.2, atomic-1.1.14, bigdecimal-1.1.0, bundler-1.3.5, bundler-unload-1.0.2, executable-hooks-1.2.6, gem-wrappers-0.9.2, i18n-0.6.9, io-console-0.3, json-1.5.5, minitest-4.7.5, minitest-2.5.1, multi_json-1.8.2, rake-0.9.2.2, rdoc-3.9.5, rubygems-bundler-1.4.2, rvm-1.11.3.8, thread_safe-0.1.3, tzinfo-0.3.38] (Gem::LoadError)
        from /usr/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
        from /usr/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
        from /usr/local/bin/rails:18:in `<main>'

I had to run gem install rails (with no sudo) to get all the gems to install properly. After I did that, I could see that Rails was installed properly:

cawood@cawood:~$ rails --version
Rails 4.0.2

Of course, my list of installed gems was much longer because the missing gems had been installed. It should look like this:

cawood@cawood:~$ gem list

*** LOCAL GEMS ***

actionmailer (4.0.2)
actionpack (4.0.2)
activemodel (4.0.2)
activerecord (4.0.2)
activerecord-deprecated_finders (1.0.3)
activesupport (4.0.2)
arel (4.0.1)
atomic (1.1.14)
bigdecimal (1.1.0)
builder (3.1.4)
bundler (1.3.5)
bundler-unload (1.0.2)
erubis (2.7.0)
executable-hooks (1.2.6)
gem-wrappers (0.9.2)
hike (1.2.3)
i18n (0.6.9)
io-console (0.3)
json (1.5.5)
mail (2.5.4)
mime-types (1.25.1)
minitest (4.7.5, 2.5.1)
multi_json (1.8.2)
polyglot (0.3.3)
rack (1.5.2)
rack-test (0.6.2)
rails (4.0.2)
railties (4.0.2)
rake (0.9.2.2)
rdoc (3.9.5)
rubygems-bundler (1.4.2)
rvm (1.11.3.8)
sprockets (2.10.1)
sprockets-rails (2.0.1)
thor (0.18.1)
thread_safe (0.1.3)
tilt (1.4.1)
treetop (1.4.15)
tzinfo (0.3.38)

Alternate Method—This didn’t work for me on Ubuntu 12.04

Step 1 – Install Ruby

You can either install the standard version: sudo apt-get install ruby-full build-essential
or the minimal requirements with: sudo aptitude install ruby build-essential libopenssl-ruby ruby1.8-dev (note the version number – you’ll have to update that).

I’m going with the first option to install ruby-full.

image

Step 2 – Install the Ruby Version Manager

CAUTION: Normally you would just run: sudo apt-get install ruby-rvm

However, there is an issue with the Ubuntu RVM package, so you should run this instead: \curl -L https://get.rvm.io | bash -s stable --ruby --autolibs=enable --auto-dotfiles

If you do run into issues with RVM (for example, rvm use doesn’t change to the version you want), you can run these commands to clean your system (see this thread):

sudo apt-get --purge remove ruby-rvm
sudo rm -rf /usr/share/ruby-rvm /etc/rvmrc /etc/profile.d/rvm.sh

Then, open a new terminal and validate that the environment is clean of old RVM settings (there should be no output):

env | grep rvm

Step 3 – Check that You’re Using the Latest Version of Ruby

ruby -v will show you which version you’re using.

If you’re not using the one you want, you can use RVM to upgrade. This will download the source that RVM then uses in the next step to compile and install Ruby; it is not a quick command.

sudo rvm install ruby-1.9.3-p125

or

sudo rvm install ruby-1.9.3 (note the version number)

However, if you take the second option, you may need to work around an issue before updating. If you try to install ruby-1.9.3, you might get the error “ERROR: The requested url does not exist: 'ftp://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-.tar.bz2'” from RVM. You can work around this by downloading the package yourself:

sudo curl -o /usr/share/ruby-rvm/archives/ruby-1.9.3-.tar.bz2 http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p0.tar.bz2

Check that you’re using the latest Ruby: ruby -v

If not, switch to the latest using RVM: rvm use 1.9.3

Step 4 – Install Rails

Finally, you can use RubyGems to install Rails:

sudo gem install rails

Step 5 – Install a Web Server

I use Apache and MySQL, so the LAMP install works for me, but there are other options such as WEBrick or Lighttpd.

Other posts on this topic:

RubyOnRails.org getting started
Ubuntu documentation – Ruby on Rails


Thursday, December 05, 2013

Using Wyzz Web-based HTML editing control in ASP.NET MVC

I recently had to put out a web-based single page application (SPA) on short notice. To make that happen, I knew I had to use some open-source controls. One was the jsTree treeview control (which I wrote about on this blog - Using jsTree with ASP.NET MVC) and another was the Wyzz WYSIWYG web-based editing control for HTML.

From their site: “Wyzz is an ultra-small, very light WYSIWYG (What You See Is What You Get) Editor for use in your web applications. It's written in JavaScript, and is free (as in speech and as in beer) for you to use in your web applications and/or alter to your needs (see the license conditions).”

image

Naturally, the first step is to add a reference to the wyzz.js script file. Once you have that, you just need to add the control to an HTML <textarea> element. Finally, it’s a simple matter of adding some JavaScript to “make_wyzz” the control.
<script language="JavaScript" type="text/javascript" src="~/Home/wyzz.js"></script>



<textarea name="textEditor" id="textEditor" rows="10" cols="40">No file loaded...</textarea><br />
<script language="javascript1.2">
    make_wyzz('textEditor');
</script> <div ng-controller="EditorCtrl"> <form novalidate class="simple-form"> <button ng-click="saveFileContent()">save</button> </form> </div>

As you can see in the example above, I’ve chosen to use an AngularJS controller to define the behaviour of the save button. In the JavaScript, I call a server-side controller action (ASP.NET in this case) and send it the content of the control by accessing the HTML element that the control is using.

$scope.saveFileContent = function () {
    // Grab the edited HTML out of the iframe that Wyzz creates ("wysiwyg" + the
    // textarea id) and post it to the server-side controller action.
    $http.post('/Home/SaveFileContent', {
        filePath: document.getElementById("multilingualfile").innerHTML,
        content: document.getElementById("wysiwyg" + "textEditor").contentWindow.document.body.innerHTML,
        title: document.getElementById("titleHtml").value
    })
    .then(
        function (response) {
            alert("File Save Result: " + response.data.Result);
        },
        function (data) {
            alert("Error saving file content");
        }
    );
}
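
For completeness, the ng-controller="EditorCtrl" attribute in the markup above assumes an Angular controller has been registered somewhere. A minimal sketch of that wiring (the module name and the ng-app placement are my assumptions, not from the original project):

// Hypothetical module name; the page's <html> or <body> tag would need ng-app="fileEditorApp"
var app = angular.module('fileEditorApp', []);

// EditorCtrl exposes saveFileContent so the ng-click binding in the form can call it
app.controller('EditorCtrl', ['$scope', '$http', function ($scope, $http) {
    $scope.saveFileContent = function () {
        // ... POST to /Home/SaveFileContent as shown above ...
    };
}]);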

Update: Here’s the basic format of the server-side part:


[HttpPost]
public ActionResult SaveFileContent(string filePath, string content, string title)
{
    try
    {
        ...
        
        return Json
            (
                new
                {
                    Result = "Success",
                }
            );
    }
    catch (Exception ex)
    {
       ...

        return Json
            (
                new
                {
                    Result = "Error saving content: " + ex.ToString(),
                }
            );
    }
}

To customize your Wyzz controls, you can edit the wyzz.js file. If you have any issues, refer to the Wyzz discussion forum.

Sunday, December 01, 2013

Using jsTree with ASP.NET MVC

When I wanted to use a pure JavaScript treeview control for a recent ASP.NET MVC5 project, I looked around and found jsTree; it’s a popular and rich solution, so I decided to try it. I ran into a few customization hurdles, so here are my lessons learned.

Note that this is for jsTree 1.0; at the time of writing, 3.0 has not been released.

Step 1: The HTML in the view. Pretty simple…

<div id="FileTree"></div>


Step 2: Loading the tree dynamically from the MVC controller using jQuery.

<script type="text/javascript"> 
// Begin JSTree: courtesy Ivan Bozhanov: http://www.jstree.com:

$('#FileTree').jstree({
    "json_data": {
        "ajax": {
            "url": "/Home/GetTreeData",
            "type": "POST",
            "dataType": "json",
            "contentType": "application/json charset=utf-8"
        }
    },
    "themes": {
        "theme": "default",
        "dots": false,
        "icons": true,
        "url": "/jstree/themes/default/style.css"
    },
    "contextmenu": {
        "items": {
            "create": false,
            "rename": false,
            "remove": false,
            "ccp": false
        }
    },
    "plugins": ["themes", "json_data", "dnd", "contextmenu", "ui", "crrm"]
})

</script>


Step 3: Server-side code to populate the tree. This code is based on desalbres’s Simple FileManager with jsTree. (The model code is below.)

// Begin JSTree (Controller code courtesy desalbres: http://www.codeproject.com/Articles/176166/Simple-FileManager-width-MVC-3-and-jsTree)
[HttpPost]
public ActionResult GetTreeData()
{
    if (AlreadyPopulated == false)
    {
        JsTreeModel rootNode = new JsTreeModel();
        rootNode.attr = new JsTreeAttribute();
        rootNode.data = "Root";
        string rootPath = Request.MapPath(dataPath);
        rootNode.attr.id = rootPath;
        PopulateTree(rootPath, rootNode);
        AlreadyPopulated = true;
        return Json(rootNode);
    }
    else
    {
        return null;
    }
}

/// <summary>
/// Populate a TreeView with directories, subdirectories, and files
/// </summary>
/// <param name="dir">The path of the directory</param>
/// <param name="node">The "master" node, to populate</param>
public void PopulateTree(string dir, JsTreeModel node)
{
    if (node.children == null)
    {
        node.children = new List<JsTreeModel>();
    }

    // get the information of the directory
    DirectoryInfo directory = new DirectoryInfo(dir);

    // loop through each subdirectory
    foreach (DirectoryInfo d in directory.GetDirectories())
    {
        // create a new node
        JsTreeModel t = new JsTreeModel();
        t.attr = new JsTreeAttribute();
        t.attr.id = d.FullName;
        t.data = d.Name.ToString();
        // populate the new node recursively
        PopulateTree(d.FullName, t);
        node.children.Add(t); // add the node to the "master" node
    }

    // loop through each file in the directory, and add these as nodes
    foreach (FileInfo f in directory.GetFiles("*.htm"))
    {
        // create a new node
        JsTreeModel t = new JsTreeModel();
        t.attr = new JsTreeAttribute();
        t.attr.id = f.FullName;
        t.data = f.Name.ToString();
        // add it to the "master"
        node.children.Add(t);
    }
}

// Don't load the jsTree treeview again if it has already been populated.
// Note: this causes a bug where the tree won't repaint on browser refresh
public bool AlreadyPopulated
{
    get
    {
        return (Session["AlreadyPopulated"] == null ? false : (bool)Session["AlreadyPopulated"]);
    }
    set
    {
        Session["AlreadyPopulated"] = (bool)value;
    }
}
// End JSTree

First, I had to resolve an issue where the treeview wouldn’t repaint on a browser refresh. It’s possible that I simply missed something when I cherry-picked code from the FileManager CodeProject example. The fix was to reset the session flag in the action that serves the view:


public ActionResult Test(string returnUrl)
{
    ViewBag.ReturnUrl = returnUrl;
    Session["AlreadyPopulated"] = false;
    return View();
}

Next, I had to customize jsTree to behave the way I wanted. Getting the tree to start collapsed (closed) instead of expanded (open) was the first order of business. The jsTree API took care of the problem.


$('#FileTree').bind("loaded.jstree", function (event, data) {
    $(this).jstree("close_all");
})


Next, I wanted the leaf nodes to use a different background image than the folder nodes. This required changing the server-side code to actually write the leaves (files) as leaf nodes, and then adding the right CSS to style the jstree-leaf class.



namespace FileEditor.Models
{
    public class JsTreeModel
    {
        public string data;
        public JsTreeAttribute attr;
        // this was "open" but changing it to "leaf" adds "jstree-leaf" to the class
        public string state = "leaf";
        public List<JsTreeModel> children;
    }

    public class JsTreeAttribute
    {
        public string id;
    }
}
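
For reference, the JSON that GetTreeData sends back to jsTree ends up shaped roughly like this (the path and file name here are illustrative placeholders, not from the original project). Every node carries the "leaf" state from the default in the model, which is what the jstree-leaf CSS selector below keys off:

{
    "data": "Root",
    "attr": { "id": "C:\\Manual" },
    "state": "leaf",
    "children": [
        {
            "data": "page1.htm",
            "attr": { "id": "C:\\Manual\\page1.htm" },
            "state": "leaf",
            "children": null
        }
    ]
}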


Then I styled the leaf nodes with a different background image than the folders.



<style type="text/css"> 
#FileTree .jstree-leaf > a > ins {
background: url("/jstree/themes/default/d.gif");
background-position: -2px -19px !important;
}
</style>


Finally, I wanted to disable the right-click context menu options since I’m not using them. (This code appears in the code above.)

"contextmenu": {
"items": {
"create": false,
"rename": false,
"remove": false,
"ccp": false,
}
},

That’s it. jsTree is now working the way I want. I expect that version 3 will be great when it is released.

Other posts on this topic:
jsTree – Few examples with ASP.Net/C#
Simple FileManager width MVC 3 and jsTree

Wednesday, November 27, 2013

Continuous Deployment to Azure Web Sites via Visual Studio 2013 and VS Online

In principle, I appreciate the value of test-driven development (TDD) and continuous integration (CI) builds; I just didn’t think they would be entirely practical for my solo, side projects. Well, I don’t have any excuses anymore. I’ve been working on a quick project, and in just a few hours I had an AngularJS single page application (SPA) remotely building and deploying to an Azure web site. All for free. Every time I check in a change, the source is copied to the cloud, a CI build is triggered remotely, my tests run, and the result is deployed automatically to the host server. Pretty slick.

image

One pleasant change is that testing web applications has come a long way from the dark days of having to wrangle with the HTTP context. Maybe the situation isn’t totally resolved, but it’s easier these days to write tests because the HTTP context isn’t as much of a stumbling block in newer frameworks.

I only ran into two issues setting everything up, and one was pretty much my fault. First, I created an ASP.NET MVC project using Visual Studio 2013 and added it to Visual Studio Online (which supports both Git and Team Foundation Service [TFS] version control online). As I wrote in a previous post, you can get free private Git repositories via Visual Studio Online. I also created an Azure website and linked it to the project. I created a CI build profile and the build triggered correctly on check-in. No problems, no errors. However, the project was not being deployed. The issue was simple: I had chosen the wrong deployment template. I should have been using the AzureContinuousDeployment.11.xaml template.

The other issue was that the Visual Studio Online build machine wasn’t in sync with the NuGet updates I had installed on my local box. To resolve these package management issues, I enabled the NuGet Package Restore feature and then checked in the entire “packages” directory to fix the missing DLL errors.
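
One gotcha if you take the same approach: the stock Visual Studio .gitignore templates typically exclude the NuGet packages folder, so you may need to delete or comment out that rule before the directory will actually check in. A sketch of the relevant .gitignore line (yours may differ):

# packages/    <- delete or comment out this line so the NuGet packages get committed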

To get this working for your project, you can start by reading the MSDN article about a continuous deployment setup.

BTW - If you don’t check in the package files, you’ll get errors such as these (and more) because the remote build machine won’t be able to find the DLLs. To help with search, I'll paste in a bunch of the errors here.

Models\AccountModels.cs (11): The type or namespace name 'DbContext' could not be found (are you missing a using directive or an assembly reference?)
App_Start\WebApiConfig.cs (5): The type or namespace name 'Newtonsoft' could not be found
Controllers\AccountController.cs (7): The type or namespace name 'DotNetOpenAuth' could not be found
Controllers\TodoController.cs (3): The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity'
Controllers\TodoListController.cs (4): The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity'
Filters\InitializeSimpleMembershipAttribute.cs (3): The type or namespace name 'Infrastructure' does not exist in the namespace 'System.Data.Entity'
Areas\HelpPage\SampleGeneration\HelpPageSampleGenerator.cs (14): The type or namespace name 'Newtonsoft' could not be found (are you missing a using directive or an assembly reference?)
Models\TodoListDto.cs (6): The type or namespace name 'Newtonsoft' could not be found
Models\TodoList.cs (6): The type or namespace name 'Newtonsoft' could not be found
Models\TodoItemContext.cs (16): The type or namespace name 'DbContext' could not be found
Models\AccountModels.cs (18): The type or namespace name 'DbSet' could not be found
Models\TodoItemContext.cs (23): The type or namespace name 'DbSet' could not be found
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "EntityFramework". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.OAuth". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.OAuth.Consumer".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.OpenId".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.OpenId.RelyingParty". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "DotNetOpenAuth.Core".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "EntityFramework.SqlServer".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "Newtonsoft.Json".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Could not resolve this reference. Could not locate the assembly "System.Web.Http.OData".
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1605): Found conflicts between different versions of the same dependent assembly.

Saturday, November 09, 2013

Messing With a Thief When My Blog Content was Stolen

While I was working on my last book, How to Do Everything: SharePoint 2013, the main McGraw-Hill editor on the project sent me a nicely worded email that basically asked if I was plagiarizing content for the book.

I was shocked to say the least, but when I read through the rest of the email, my shock quickly turned to burning rage. The reason I was being asked this question was that some moron was copying other people’s blog posts to his blog and passing them off as his own. I had used some of my own content in the draft of the book and the editor wasn’t sure who wrote it.

I sent the scoundrel an email giving him 48 hours to remove the post before I took any action. Of course, he probably didn’t realize that I could actually do anything if he chose to ignore me. I also contacted some other people to let them know that their content was being stolen by the same guy.

The funny part was that the thief was so lazy that he didn’t even bother copying the images; he just copied the source, and therefore I had control over the images on his blog. A friend of mine had encountered the same thing and decided to mess with the culprit, so I figured I’d do the same. For the first day after the warning, I gave him a chance to take down the post without any really embarrassing images on his blog. Remember that these are screenshots of his blog. I could do whatever I wanted with the images since they were on my server.

 BlogTheftDay1_edit
day 1 – tame and to the point

On day 2, I asked my friend for permission to use the image he posted when his content was stolen.

 BlogTheftDay2_edit
day 2 – you should have listened on day 1

After that, I went for a random theme.

 BlogTheftDay3_edit
day 3 – now his blog was actually worth reading

 BlogTheftDay4and5edit
day 4 and 5 – just some random stuff

Eventually, he took down the post. He never apologized or replied to my message, but there’s a lesson in this for lazy thieves.

Wednesday, October 30, 2013

I Miss Compiled Server-side Code

Clearly, we should accept the things we cannot change, but sometimes it's fun to rant for the sake of ranting. This week I was working on a web application using AngularJS/AJAX and it reminded me that I really miss relying on server-side code. I know it's not in vogue, but I'll say it... I liked web development before JavaScript took over as such a heavy part of the code base. Writing this post, I do feel a bit like an old man talking about the "good 'ol days," but I also feel strongly that there must be developers out there suffering under the new fashion of client-side code. (And don't even get me started on running 'scripting' languages server-side. :)

I get the 'why'--I really do. I understand that the IT world is moving to the cloud and web applications heavily lean on script-based client-side frameworks to allow for a powerful user experience across platforms including mobile devices such as phones and tablets. (I also get that the V8 JavaScript VM used in Chrome doesn't include an interpreter--that's just splitting hairs.) Yes, I understand all that and it's great.

The problem is the dev experience. I've been using Sublime and I tried out the web tools in Visual Studio 2013, and they have made some progress to better support JavaScript and client-side frameworks, but the experience is just not as good as the good 'ol days of writing compiled server controls. Client-side code is more fragile because there's no compiler to help you find all sorts of errors. Writing "myvariable" instead of "myVariable" can break your whole application and you likely won't know it until you try to run that piece of code. When debugging issues, there are cases where it's necessary to pull out a tool such as Fiddler, which allows you to manually inspect the communication between the client and server. I'm not knocking Fiddler (it's a fantastic asset), but seriously, manually reading JSON to figure out why it's malformed? (Not that I would ever do that.) We may as well be back in the 90s writing VBScript.

To be clear, I'm not some sort of code snob. Code is code and devs should be judged on code quality, not the language they happen to be using. So I'm not saying that Java, C# or Go is in some way better than JavaScript; that's not my point at all. I'm simply saying that my experience writing code was more enjoyable when I could rely on some well-refined tools to improve my productivity.

What's the solution to all this? We clearly need dev tools good enough that the difference between server-side and client-side code is irrelevant. In my mind, the various JavaScript frameworks have improved the situation, but there's lots of room for improvement. Client-side code (or server-side scripts such as Node.js) will be around for a long time and it should be treated as first class.

Monday, September 23, 2013

Changing Site Access Request Email in SharePoint 2013 (Office 365)

The option to set the email address for any SharePoint site access requests has moved around in the last few versions, so I thought I’d post this for those searching through old posts looking for one about SharePoint 2013.

This setting determines who will receive an email when a user requests access to a particular site—usually when the user tries to access the site and is denied. The tricky part is that the email address for this request is not related to the site owner permissions; it’s just a string.

To find the setting, navigate to:

Site Settings (Gear icon on top-right) > Site permissions > Access Request Settings (in the ribbon)

First use the gear icon in the top-right corner to get to the Site Settings page. If you don’t see the Site Settings link, you probably don’t have sufficient rights to make this change.

image

Once there, click on Site permissions.

SNAGHTMLf46205b

This will open the “Permissions: <site name>” page where you can access the Access Request Settings option from the ribbon at the top of the screen. The option you’re changing is “Send all access requests to the following e-mail address.”

SNAGHTMLf4595d8

Simply enter the email address you’d like to use for access requests and you’re done.

Wednesday, September 18, 2013

Convert Broken HTML to XHTML

I recently made the decision to refactor a 600-page software manual. That’s a daunting task, so why did I do it? The old format was barely working, inflexible, required a truly awful proprietary tool, and cost the company considerable time and money when changes (such as translations) were required.

The underlying pages were in HTML, or at least the closest thing to HTML that still actually worked. In reality, the code was awful; there were broken tags and redundant tags all over the place. The editor in question (developed by a small company in Hawaii) is nothing more than a wrapper around Microsoft’s free HTML Help Workshop tool. I decided to clean up the HTML (read: convert it to XHTML), dump the editor, and dynamically build the manual the same way I’ve done at companies in the past. This is an ongoing project, but here’s how I handled the task of cleaning up ~600 HTML files so that they were valid XHTML.

Resources:

HTML - Special Entity Codes
HTML Tidy
Online RegExr Test Tool and Interactive Tutorial
Sublime Text Editor

image
- running a Regular Expression replace in the Sublime Text editor

Step 1: Clean up the HTML with HTML Tidy

HTML Tidy is a convenient way to repair poor HTML. It doesn’t fix everything, but it does help, and it makes the code look a lot better since it fixes much of the indentation. So the first thing I did was run this HTML Tidy command on all the files.

I ran this in Git Bash after turning off word wrap in the Tidy settings file. Even with the word-wrap option set, HTML Tidy inserted more newlines than you’d expect, so it isn’t perfect, but it made a big difference.

$ find /C/Manual -type f -name "*.htm" -exec tidy -f errors.txt -m -utf8 -i {} \;
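
For reference, here’s a minimal sketch of a Tidy configuration file that turns wrapping off (the file name tidy.conf is my choice; wrap, output-xhtml, and indent are standard Tidy config options, but check tidy -help-config for your version). You would pass it with -config on each run:

wrap: 0
output-xhtml: yes
indent: auto

$ find /C/Manual -type f -name "*.htm" -exec tidy -config tidy.conf -f errors.txt -m {} \;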
Note that you can remove the HTML Tidy watermark pretty easily using find/replace in Sublime. And that is a nice segue to the next step.

Step 2: Simple Find/Replace in Sublime

Using the “Find in Files…” feature, it’s easy to make simple text substitutions in Sublime. For example, to be XHTML compliant, I needed to convert &nbsp; to &#160;, <BR> to <br/>, and many other examples.

I also needed to simply remove some tags. For example, tags added when someone pasted text from Microsoft Word into the editor (e.g., <o:p> and </o:p>).

Sublime will help you figure out the syntax to match just the current open file, all open files, or a whole directory structure. For example, in the “Where” box for Replace, you might enter c:\directory\test,*.htm to match all .htm files.

Step 3: RegEx Find/Replace in Sublime

Simple find/replace actions got me part way there, but they wouldn’t solve all the issues I had to deal with in the broken HTML. The next step was to use regular expressions to enable some more sophisticated corrections.

One example was attributes within HTML tags (such as size, height, etc.) that weren’t enclosed in quotation marks. Browsers will deal with that transgression, but it’s not valid XHTML. I had to find a quick way to add the quotes around these attributes in ~600 files. The answer was find/replace using regular expressions in Sublime.

Find: (size=)([0-9]+)

This creates two capturing groups, with “size=” as the first and a run of digits as the second. (Note the + so the whole number is captured, not just one digit.)

Replace: $1"$2"

This replace command encloses the second capturing group in quotation marks. For example, size=100 becomes size="100".
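
The same trick generalizes to the other unquoted attributes; for example, one pass that quotes several numeric attributes at once might look like this (the attribute list here is just an illustration, so adjust it to whatever your files actually contain):

Find: ((?:size|width|height|border)=)([0-9]+)
Replace: $1"$2"
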
Well, that’s all for now. I hope you found this helpful. I encourage you to try the RegExr online tool; it’s great for refining regular expressions.

Friday, September 13, 2013

Scripting Regular Expressions Find/Replace in Sublime Text

Just like everyone in the software industry, I’ve used many, many different text editors over the years: vi, Emacs, Notepad, Notepad++, TextPad, nano, gedit… to name a few. But, of course, there is a ridiculously long list of text editors and I haven’t tried that many of them. The one that I’ve been working in for most of this year is Sublime Text. Sublime’s tag line is “Sublime Text: The text editor you’ll fall in love with.” I may not be in love yet, but I’m definitely checking it out a lot.

In addition to having excellent functionality built in, Sublime also supports a plug-in model. There is the GoSublime plug-in for golang code, and there’s the RegReplace plug-in that allows you to write find/replace commands using regular expressions and save them as Command Palette commands.

To install RegReplace on Windows:

Step 1: Install Package Control for Sublime Text

As you can see in the installation instructions for Package Control for Sublime Text, all you have to do is open a Sublime Text console (Ctrl+`) and paste in the install code.

Step 2: Install RegReplace for Sublime Text

image

Option 1: After installing Package Control, download RegReplace, unzip it, and paste the unzipped folder into your Packages folder (e.g., C:\…\Sublime Text 3\Packages). Next, go to Preferences > Package Control and choose “Package Control: Install Package.” Then choose the package you want to install from the list.

Option 2: This requires that you have Git. Open a command prompt, cd to your Sublime Text 3 Packages directory, and then enter the following command:

git clone -b ST3 https://github.com/facelessuser/RegReplace.git RegReplace

Step 3: Add a Default.sublime-commands file -- with your command added

I had to manually create the folder: C:\Users\username\AppData\Roaming\Sublime Text 3\Packages\RegReplace

Once you have the file, you can add your own custom commands. (Remember to add a comma after the preceding command in the file.)

    // Test RegReplace
    {
        "caption": "Reg Replace: Test",
        "command": "reg_replace",
        "args": {"replacements": ["test_reg_replace"]}
    },

Step 4: Add your custom command to the Command Palette

To do this, choose Tools > Command Palette, then type “setting” and you’ll see “Preferences: Reg Replace Settings – User.” Choose this option and paste the example that comes with RegReplace into the new file that’s created.

This will create the file: /C/Users/username/AppData/Roaming/Sublime Text 3/Packages/RegReplace/reg_replace.sublime-settings

After the file is created, add a new command to reg_replace.sublime-settings. For example:

    // Test the RegReplace Sublime Plugin
    "test_reg_replace": {
        "find": "testxxxxx",
        "replace": "it works!"
    }
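
Since the whole point of the plug-in is regular expressions, here’s a slightly more realistic rule using capture groups (the rule name and pattern are my own illustration, and the backreference syntax can vary between RegReplace versions, so check the plug-in’s documentation):

    // Quote unquoted size attributes, e.g., size=100 becomes size="100"
    "quote_size_attr": {
        "find": "(size=)([0-9]+)",
        "replace": "\\1\"\\2\""
    }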

image

Step 5: Try it out!

Once you have everything set up, you can use the Command Palette to run your new custom find/replace regular expressions scripts.

image

Note: After installing RegReplace, I received an error when trying to add my test code to reg_replace.sublime-settings. The error was something like “Cannot save. Can’t create .tmp file in RegReplace folder.” To work around the issue, I opened the security settings for the folder and added write permissions for all users. A bit overkill, but in my case, it’s a secure machine.

Thursday, August 22, 2013

GoREST (golang web services) Simple Examples

In my last post, Installing GoREST on Ubuntu Linux, I mentioned that I should really post a simple example of using GoREST. Here’s some example code for a Get and Post request using GoREST web services in golang.

Note that I’m also using the go-sql-driver/mysql driver for MySQL in golang. To install the driver, simply run:

$ go get github.com/go-sql-driver/mysql

Also, this example isn’t actually using JSON. Check out the link at the end of this post for a JSON example.

package main

import (
    "database/sql"
    "log"
    "net/http"

    "code.google.com/p/gorest"
    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // GoREST usage: http://localhost:8181/tutorial/hello
    gorest.RegisterService(new(Tutorial)) // Register our service
    http.Handle("/", gorest.Handle())
    http.ListenAndServe(":8181", nil)
}

// Service definition
type Tutorial struct {
    gorest.RestService `root:"/tutorial/" consumes:"application/json" produces:"application/json"`
    hello  gorest.EndPoint `method:"GET" path:"/hello/" output:"string"`
    insert gorest.EndPoint `method:"POST" path:"/insert/" postdata:"int"`
}

func (serv Tutorial) Hello() string {
    return "Hello World"
}

func (serv Tutorial) Insert(number int) {
    // Note: "dbname" and "table" are placeholders--use your own schema names.
    db, err := sql.Open("mysql", "root:password@/dbname?charset=utf8")
    if err != nil {
        log.Println(err)
        serv.ResponseBuilder().SetResponseCode(500)
        return
    }
    defer db.Close()

    // Use a parameterized query rather than concatenating the value into the SQL.
    if _, err := db.Exec("INSERT INTO `table` (number) VALUES (?)", number); err != nil {
        log.Println(err)
        serv.ResponseBuilder().SetResponseCode(500)
        return
    }

    serv.ResponseBuilder().SetResponseCode(200)
}
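
To smoke-test the service, a couple of curl calls would look something like this (a sketch; the exact POST payload handling depends on the GoREST version):

$ curl http://localhost:8181/tutorial/hello
$ curl -X POST -H "Content-Type: application/json" -d "42" http://localhost:8181/tutorial/insert/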

I’m using the Postman REST Client for my POST tests; you can download Postman for free from the Chrome web store. (Blog post on using Postman with JSON.)

image