In the last couple of weeks I’ve been reading two books that I hoped would help me with my role as a practice lead.
One of them is “Debugging Teams” by Brian Fitzpatrick and Ben Collins-Sussman, published by O’Reilly. It is a small book of around 150 pages, filled with practical advice on how to build successful teams that work and collaborate together. When I started reading it, it reminded me of a book I read a year back called “Team Geek”. Of course, I had missed the clue that “Debugging Teams” is a rewrite of “Team Geek”, expanded to include feedback on the first book.
The main advice of Debugging Teams is the simple idea of practicing HRT (Humility, Respect and Trust). It permeates the whole book as an effective acronym to keep in mind during your team leadership work. The rest of the practical advice in the book covers how to manage conflict, how to build a strong team ethos, how to navigate organizational hierarchies, and so on.
Debugging Teams, in essence, is a great update to an already great book. I’d rate it with 5 stars, wholeheartedly.
The second book is even smaller, but equally useful. It is called “Exercises for Programmers” by Brian P. Hogan, published under the umbrella of the Pragmatic Programmers brand. With the subtitle “57 challenges to develop your coding skills”, it is an exercise book that begins with “Hello, world” challenges and ends with complete small projects such as to-do lists or URL shorteners.
I use it now to send bi-weekly code challenges to my team. We then sit together to do a joint code review of each individual solution, in order to learn how to improve code legibility and maintainability. It is equally suitable as a source for code katas, test-driven development (TDD) assignments, or self-study challenges when learning a new programming language.
I rate “Exercises for Programmers” 4 out of 5, only because many of the programs are very simplistic, and because several examples are perhaps too US-centric (imperial units and USA-specific jargon) for a universally applicable book. Having said that, it is a must-have if you want to challenge your (or your team’s) programming skills.
Last week I was in Stockholm for the annual European SharePoint Conference 2015. I was a little tired after being in the USA for the MVP Summit the previous week, but happy to meet my dear SharePointers and get to know some new ones.
My talk was about “Extending Authentication and Authorization”. I talked about claims, the underpinning of all things AuthN and AuthZ in SharePoint 2013. My demo was a custom claims provider that exposed dummy claims in the People Picker, which were used to protect confidential documents from normal users.
I also demoed the federated authentication with SharePoint and ADFS.
In my opinion, the majority of the development tutorials that show you how to build a web application do just that: they show you how to build a demo application. This demo application isn’t supposed to be production-ready, nor should it support high user loads. But what happens when you need a scalable application? This is the missing piece I thought I could provide with my course.
The course takes a simple web application named Ticketer, a simple-but-complete event and ticketing MVC 5 application, and refactors it into a scalable, redundant version of itself using a variety of techniques such as removing storage locking, caching, asynchronous calls and non-relational data storage.
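To make one of those techniques concrete, here is a minimal sketch of the caching idea in plain JavaScript (the course itself uses C# and MVC 5). The function names and the 60-second TTL are my own illustration, not code from the course: a wrapper that memoizes a slow lookup so repeated requests stop hitting the backing store.

```javascript
// Wrap a slow lookup function so results are cached for ttlMs milliseconds.
// Names and TTL are illustrative assumptions, not from the Ticketer course.
function cached(fn, ttlMs = 60000) {
  const cache = new Map(); // key -> { value, expires }
  return (key) => {
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) {
      return hit.value; // cache hit: skip the expensive call
    }
    const value = fn(key); // cache miss: do the real work
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

The same shape applies whether the backing store is a database, blob storage or an external API; the win is that only the first request per TTL window pays the full cost.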
In the following clip you can see the load test of the application before and after the scalability improvements.
Questions? Leave a comment and I’ll do my best to answer them.
My session was about connecting IoT to Office 365 (via Azure). I used an Intel Galileo prototyping board with a passive infrared (PIR) sensor. The sensor data was used to determine whether a meeting room was empty or occupied. The Galileo uploads the raw data to an Azure Notification Hub. A continuously running Stream Analytics job then translates the raw data into a 1-minute resolution of room availability and inserts this data into Azure Table Storage. Finally, a provider-hosted Office 365 SharePoint application visualizes the room availability.
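As a rough illustration of what the Stream Analytics job does, here is the same 1-minute aggregation sketched in plain JavaScript. The data shape and names are my assumptions for the sketch; the real job is written in the Stream Analytics query language.

```javascript
// Collapse raw PIR readings into 1-minute resolution.
// Each raw reading is assumed to be { time: epoch ms, motion: boolean }.
// A minute counts as "occupied" if any motion was detected within it.
function toMinuteResolution(readings) {
  const buckets = new Map(); // minute (epoch ms) -> occupied flag
  for (const r of readings) {
    const minute = Math.floor(r.time / 60000) * 60000; // truncate to the minute
    buckets.set(minute, (buckets.get(minute) ?? false) || r.motion);
  }
  return [...buckets.entries()]
    .sort((a, b) => a[0] - b[0])
    .map(([minute, occupied]) => ({ minute, occupied }));
}
```

In the actual pipeline, the equivalent grouping is done with a 1-minute tumbling window, and each output row becomes an entity in Azure Table Storage.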
I have been unusually silent on Twitter and on this blog in the last few weeks. The reason is that I have left Spenta / Beezy, after two wonderful and exciting years, and I have joined Sogeti Spain as Senior SharePoint Architect. I have nothing but words of gratitude for my fellow Spentians, and it is always a pain to part ways with so awesome a team.
My new work role in Sogeti Spain will be to keep building top-notch SharePoint solutions, but additionally I will be acting as a SharePoint Team Lead. I have already started making some changes in the development practice with the ultimate aim of creating a culture of excellence in SharePoint solution development.
I have been thinking lately about the evolution of corporate intranets that I have witnessed while working on many SharePoint projects. In this post I’m going to summarize my thoughts on that evolution. This is also going to be a technology-free post.
The main difference between an intranet and a public web site is that intranets are not anonymous. The user browsing the intranet has a name and a username (and hopefully more than that), and this information can be used in many ways.
The “Classic” Intranet
The “classic” intranet in the prehistory of SharePoint was fairly uniform in the content it presented. Little or no information was personalized for the user that was viewing the page.
The content of the intranet was “pushed” to the intranet from centralized locations such as the News Center or the Announcements of different sites. Usually, there were few people actively adding content and all the rest of the intranet users were passive consumers of that information.
So much for the “dynamic” content that was being added. However, most of the content in the intranet was static or nearly static: telephone listings, project and department descriptions, and so on. That content rarely (if ever) changed.
In consequence, the intranet home page reflected that approach. The “push” model distributed news that was prominently displayed. There was also a myriad of shortcuts, links and navigation contraptions that let the user explore the intranet further.
Naturally, this led to unengaged intranet users. They simply had no need to visit the intranet regularly. Only a casual visit or a fact-finding necessity would cause them to open the intranet in their browsers.
Some companies would leverage the user information to filter what users see on their home page, such as showing only the information relevant to their department. However, most organizations didn’t have these problems, as their volume of new information was low and filtering served no purpose.
The “Social” Intranet
In the last 4-5 years, social computing technologies have made their appearance in the corporate world, after having taken the private user space by storm. The immediate nature of social updates and the viral features of popular content were seen as the cure for the unengaging, static intranets of the past.
The news section was replaced, or prominently complemented, by a “wall”, “feed” or “conversation”. Its dynamic nature ensured always-fresh content in the intranet. However, it also opened the way for information overload. From being starved of information by the old intranets to being choked by the sheer volume of information generated every day… in just a few years.
Social computing also features a network, where every user has connections to other users. It may be an explicit connection, such as a follow action, or an implicit connection, such as belonging to the same department. These connections are then used to surface the information generated by the users the visitor is connected to. You could see documents and content created by the people you are connected to, and hopefully this “social” filter would reduce the information overload to a more personal level.
This filtering by user characteristics such as connections, context and behavior is what is called a “pull” model, where the information is pulled out of the vast flood of content for the current user.
To Push or To Pull?
In the light of the rising popularity of the social intranets, we may think that the “pull” model is superior to the “push” model. There is some truth in this, but in my opinion the answer isn’t just that simple.
I think that the key to intranet success is the information context. This context is what separates raw data from a useful piece of information.
The “push” model makes the context static and uniform to all users. The “pull” model makes the context unique to the user. And the answer lies in a wise mix of both push and pull models.
Not all the content in the intranet is the same. There is a need for global information (such as an IT services outage notice) that benefits from the push model. The rest of the information is more or less contextualized. So, the “news pushed to every user on their home page” is a clear candidate to be ditched in favor of the pull model.
The pull model makes the context social and user-centric. This is true for many situations: the content I have been interacting with, the content created by the users I have interacted with, the content about the topics I find interesting, and similar derived situations. However, there is no single recipe: the fact that I follow a user doesn’t mean that I am interested in everything he or she creates and shares.
Here is a feature that is missing from many “social” intranets: curated content. Content curation is the act of providing context to information. We need users to curate, collect, collate and organize the content that is relevant in a specific context, and then make that context easily findable. Wiki pages, for example, are perfect containers for curated content.
The art of good intranet design is the art of wisely combining the three models: pushed, pulled and curated content to provide the best experience for the intranet users. There is sadly no unique recipe to share here, that’s why I call it an art.
It is also what makes intranet information architecture projects so exciting!
Yesterday Microsoft announced the availability of Azure App Service, a new high-level grouping of services for building apps on the Azure cloud platform. According to the announcement blog post:
App Service is a new, one-of-a kind cloud service that enables developers to build web and mobile apps for any platform and any device. App Service is an integrated solution that streamlines development while enabling easy integration with on-premises and SaaS systems while providing the ability to quickly automate business processes.
I immediately saw “On-Prem SharePoint Server” in the list of the available connectors for Logic Apps and API Apps.
SharePoint is also visible in the API Apps catalog in Azure.
This made me think that SharePoint 2016 could, in theory, use the new Azure App Service infrastructure to run workflows (now called Logic Apps, similar to BizTalk orchestrations) that span multiple services: SharePoint, Exchange, public and private social networks, data stores and so on. The logic of the workflow would be hosted in Azure and would consume the other services through the connectors. The authentication could be brokered by Azure AD.
I like the idea. Only Ignite will let us know how much of it holds true.
My latest adventure with LightSwitch was trying to detect when a popup window inside a screen is closed.
You can close a LightSwitch popup window by clicking or tapping outside the popup. The jQuery Mobile popup widget that LightSwitch uses then closes the popup. However, I wanted to intercept that event and do some housekeeping, such as discarding the changes made in the popup.
The difficult part was finding out where to put the event hookup code. After that, it was just a question of using the jQuery Mobile popup widget’s afterclose event, which is triggered when a popup is closed.
The right event to listen for in my case was the rendering of the popup content. In the LightSwitch designer, add a postRender event handler and attach the afterclose event to the parent object (the popup itself):
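Here is a sketch of that hookup, with placeholder names. The handler name, the selector and the housekeeping inside the callback are my assumptions; your screen and popup names will differ.

```javascript
// Hypothetical postRender handler for the popup's content item.
// Attach it as the postRender event in the LightSwitch screen designer.
function popupContentPostRender(element, contentItem) {
  // Walk up from the rendered content to the jQuery Mobile popup that
  // hosts it, and listen for its close event. jQuery Mobile prefixes
  // widget events, so "afterclose" is bound as "popupafterclose".
  $(element).closest("[data-role=popup]").on("popupafterclose", function () {
    // Housekeeping goes here, e.g. discarding the changes the user
    // made inside the popup before it was dismissed.
  });
}
```

The key point is that postRender is the earliest moment at which the popup element exists in the DOM, so the event binding reliably happens exactly once per popup.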
Another weird SharePoint app bug happened yesterday. The solution was fairly easy once you know what’s going on, but it’s just weird altogether.
You have a custom app in your SharePoint 2013 App Catalog.
You want to add this app to a SharePoint site, but you can’t find it in the “From Your Organization” section when you click “Add an app” in a site.
I first suspected that the current user didn’t have permissions to add an app. However, the user is the site collection administrator and thus has permission to install an app.
Yet… there is a slight detail. The App Catalog site is, well, a SharePoint site, with its own permissions. And, by default, it contains only the user who created the catalog in the first place (the system admin).
So, the current user, although a site collection admin, doesn’t have permissions to read from the app catalog. (This is the weird part, as I expected SharePoint to do the reading using a system account behind the scenes.)
Add the users that should be able to install your custom apps to the site permissions of the App Catalog site, with Read permission level. In my case it was “Dani Alves” (yes, I’m a Barcelona fan).
Now, the app is visible in “Your Apps” when you try to add it to a site. Yeah!