Monday, December 28, 2015

Agile & Waterfall Methodologies – A Side-By-Side Comparison


There’s a saying that goes “there’s more than one way to skin a cat.” Fortunately for cats everywhere, we’re not going to skin one. We’re simply applying this logic to software development.
There are several ways to develop software, two of the most prominent methods being waterfall and Agile. And, as with anything that can be done in two ways, a debate rages about which is best. Does it really matter? Doesn’t either way give you a product (or, well, a skinned cat)?
We’ll let you decide. Today, we’re arming you with information about both waterfall and Agile methodologies so that you can make an informed decision as to what you think is best.
 What is the waterfall methodology?
Much like construction and manufacturing workflows, waterfall methodology is a sequential design process. This means that as each of the eight stages (conception, initiation, analysis, design, construction, testing, implementation, and maintenance) is completed, the developers move on to the next one.
As this process is sequential, once a step has been completed, developers can’t go back to a previous step – not without scratching the whole project and starting from the beginning. There’s no room for change or error, so a project outcome and an extensive plan must be set in the beginning and then followed carefully.
 Advantages of the Waterfall Methodology
1. The waterfall methodology stresses meticulous record keeping. Having such records allows for the ability to improve upon the existing program in the future.
2. With the waterfall methodology, the client knows what to expect. They’ll have an idea of the size, cost, and timeline for the project. They’ll have a definite idea of what their program will do in the end.
3. In the case of employee turnover, waterfall’s strong documentation allows for minimal project impact.
 Disadvantages of the Waterfall Methodology
1. Once a step has been completed, developers can’t go back to a previous stage and make changes.
2. Waterfall methodology relies heavily on initial requirements. However, if these requirements are faulty in any manner, the project is doomed.
3. If a requirement error is found, or a change needs to be made, the project has to start from the beginning with all new code.
4. The whole product is only tested at the end. If bugs are written early, but discovered late, their existence may have affected how other code was written.
Additionally, the temptation to delay thorough testing is often very high, as these delays allow short-term wins of staying on-schedule.
5. The plan doesn’t take into account a client’s evolving needs. If the client realizes that they need more than they initially thought, and demand change, the project will come in late and impact budget.
 When should you use waterfall methodology?
1. When there is a clear picture of what the final product should be.
2. When clients won’t have the ability to change the scope of the project once it has begun.
3. When definition, not speed, is key to success.
 What is Agile?
Agile came about as a “solution” to the disadvantages of the waterfall methodology. Instead of a sequential design process, the Agile methodology follows an incremental approach.
Developers start off with a simplistic project design, and then begin to work on small modules. The work on these modules is done in weekly or monthly sprints, and at the end of each sprint, project priorities are evaluated and tests are run. These sprints allow for bugs to be discovered, and customer feedback to be incorporated into the design before the next sprint is run.
Because it lacks a heavy up-front design and a fixed sequence of steps, the process is sometimes criticized for its collaborative nature, which focuses on principles rather than process.
 Advantages of the Agile Methodology
1. The Agile methodology allows for changes to be made after the initial planning. Rewrites to the program, as the client decides to make changes, are expected.
2. Because the Agile methodology allows you to make changes, it’s easier to add features that will keep you up to date with the latest developments in your industry.
3. At the end of each sprint, project priorities are evaluated. This allows clients to add their feedback so that they ultimately get the product they desire.
4. The testing at the end of each sprint ensures that the bugs are caught and taken care of in the development cycle. They won’t be found at the end.
5. Because the products are tested so thoroughly with Agile, the product could be launched at the end of any cycle. As a result, it’s more likely to reach its launch date.
 Disadvantages of Agile Methodology
1. With a less successful project manager, the project can become a series of code sprints. If this happens, the project is likely to come in late and over budget.
2. As the initial project doesn’t have a definitive plan, the final product can be grossly different than what was initially intended.
 When should you use Agile methodology?
1. When rapid production is more important than the quality of the product.
2. When clients will be able to change the scope of the project.
3. When there isn’t a clear picture of what the final product should look like.
4. When you have skilled developers who are adaptable and able to think independently.
5. When the product is intended for an industry with rapidly changing standards.
  Both the Agile and waterfall methodologies have their strengths and weaknesses. The key to deciding which is right for you comes down to the context of the project. Is it going to be changing rapidly? If so, choose Agile. Do you know exactly what you need? Good. Then maybe waterfall is the better option. Or better yet? Consider taking aspects of both methodologies and combining them in order to make the best possible software development process for your project.

Why Go Agile Method of Development


Why Go Agile? Understanding the Benefits of the Agile Method of Development

The emergence of the “Agile” development method lies in the real-life project experiences of professionals working in the IT field, specifically with the challenges and shortcomings of the more traditional “Waterfall” method. With this approach, development teams have found a way to counter the rigidity and the lengthy, expensive timelines of traditional development.
A comparison of traditional "Waterfall" development method, versus the "Agile" technique.

What is the Agile Approach?

Agile is a lean and effective model for the successful development of various technical solutions including: websites, web applications, software, and mobile applications. There are many Agile methodologies, but most include the following steps.

Discovery / Analysis

Before beginning development, it is essential to understand the client’s background, business goals, and product vision. Agile projects comprise a series of initial discovery sessions and research to attain a deep understanding of the client's goals, challenges, business climate, customers, and users. These sessions include key development team members (project managers, developers, and designers) and the client to gain a uniform and shared understanding of the project scope and outcomes. 

Planning / Prioritizing

After the discovery sessions, the client and development team work together to build a high-level product backlog. This backlog includes a prioritized wish list of features that will be useful to the client and their users. The priority will determine the order in which the features are elaborated, developed, tested, and delivered. Consequently, the team builds a development timeline centered on delivering the highest value features before moving on to lower-value ones.

Designing

Once the client’s vision is thoroughly grasped, the design team begins imagining the finished product. Some design phases include the creation of wireframes, technical specifications, and visual designs, which are approved by the client and development team over many iterations. 

Building

After design and technical requirements are finalized, developers build the product in its determined format. Using the prioritized feature list in the planning stage, developers add functionality in short development “sprints,” allowing the client to test each feature as they are completed and while development continues.

Testing

Product testing and quality assurance take place continuously to detect and resolve defects. It’s also possible to test working software in a demo environment with real users. Feedback is immediately incorporated to improve the product. These iteration cycles continue until the whole product is delivered, and can even continue after the product launches to the public.

Advantages of Agile Development

Client-Developer Collaboration

Close collaboration enables the team to truly understand the client’s vision.
Stakeholders also gain confidence in the team’s ability to deliver high-quality working software, as they can see tangible results. A firm client relationship can thus be built, which encourages successful development and future profitable business.

User Satisfaction

A 1995 study of over $37 billion USD worth of U.S. Defense Department projects found that 46% of the software did not meet the real needs and another 20% required rework to be usable. 
Agile’s continuously evolving planning, execution and feedback loop enables the team to align the software with the desired business needs. The final product that comes out of the Agile methodology better addresses the business and customer needs. 

Development Gets Done

Agile’s method of breaking development into small, feature-focused iterations makes it less cumbersome and overwhelming, allowing for more “quick wins.” 
According to the Standish Group's famous CHAOS Report of 2000, 25% of all projects simply fail through subsequent cancellation, with no useful software developed. A study in the UK showed that of 1,027 projects using Agile, 87% were completed.

Fast and Cost-Effective

According to survey results, Agile software development practices deliver in up to 50% less time, with a higher degree of client and customer satisfaction.
Development sprints allow the features to be delivered quickly and frequently, with a high level of predictability. The cost of each sprint can be predicted easily, since it is based on the amount of work the team can perform in the allotted time. The client can understand the approximate cost of each feature and thus decide upon the priority of features and the need for additional iterations.
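As a rough illustration with entirely hypothetical numbers: a team of five running two-week sprints (10 working days) at a blended rate of $800 per person-day costs roughly 5 × 10 × $800 = $40,000 per sprint, so a feature estimated at half a sprint of work can be budgeted at around $20,000 before development even begins.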

Other Advantages

  • Its fluidity and openness allow teams to adapt to the constant evolution of the functional and technical landscape.
  • The focus is always on the speedy delivery of business value, which cuts down the risks associated with software development. 
  • Development iterations allow for teams to add features or shift priorities based on testing feedback.

Friday, December 18, 2015

Challenges in Office 365 development


Challenges in Office 365 development - and ways to address them

Over the last 2 years, I've spent quite a lot of time thinking about "cloud-friendly" SharePoint development - approaches and techniques which will work for Office 365, but also on-premises SharePoint deployments which need to be designed with the same principles. With my team now having done at least 20 or so Office 365/SharePoint Online implementations (and counting), in this time we’ve established certain ways of working with Office 365. A good example is creating multiple tenancies to represent dev/test/production to help with our ALM processes, since Office 365 doesn't have the concept of a test environment. Sure, on the SharePoint side (my focus here) you could just create a special site collection - but that doesn’t give you the isolation needed across any global elements. Examples include things like user profiles, taxonomy and search. And since our clients often ask for minor customizations which interact with these areas, the “test site collection” idea just doesn’t cut it for us. Instead, we go with the multiple tenancy approach, and I’ve advocated this for a while for anyone with similar needs.
As I’ve previously discussed at conferences, our choice for most clients/projects is to create separate dev and test tenancies, and it generally looks something like this:
clip_image002
It’s not the most important point here, but we tend to use different Office 365 plan levels for these environments - most of our clients use the “Office 365 E3” plan in production, and we’ll ensure TEST is on the same plan, but reduce costs by having DEV use “SharePoint P2” (so no Exchange, Lync or Yammer Enterprise). This generally works fine, because most of our development work centers on the SharePoint side. But regardless of what you do with plan levels, it’s also true that some trade-offs come with using multiple Office 365 tenancies (of any kind) – recently I’ve been thinking about options to mitigate this, and these broadly can be categorised as:
  • Making dev/test Office 365 tenancies more like production
  • Finding ways to test safely against the production Office 365 environment
The next few blog posts will detail some techniques which can help here. In this first post I’ll discuss the problem space - some problems I see with current Office 365 development which might lead you to consider these approaches. But in the article series I’ll be discussing things like:
For now, let’s talk about some of the issues you might hit in Office 365 development.

Challenges which come with multiple Office 365 tenancies

When we talk about dev and test environments, implementation teams always have a need to make these as similar to production as possible. The more differences, the more likely you’re going to have a problem - usually related to invalid testing or defects which only become apparent in production. Unfortunately, I notice our Office 365 projects do have certain trade-offs here. We really do want the multiple Office 365 environments for dev/test/prod (with the way Office 365 dev currently works at least), but it can be hard to make those other environments closely reflect production. Here’s a list of things which might be different:
  • Typically, directory integration is configured in production, but NOT for other environments
    • In other words, users sign-in with “chris@mycompany.com” in production, but “.onmicrosoft.com” accounts are used in dev/test
    • [N.B. You might know this step as “implementing Azure AD Sync”, or “implementing DirSync” to use its previous name]
  • Lack of SSO to Office 365 for users logged on to the company network (which relates to the point above)
  • Lack of a full directory of users
  • User profiles are not synchronized from AD in dev/test environments
  • Lack of Yammer Enterprise
  • Lack of Yammer SSO
  • Different license types (e.g. E3/E4 in production, but something else in dev/test)

OK, but why should I care about these things?

Depending on what you’re developing, some of these things can definitely cause problems! Some tangible examples from our experience are:
  • It’s not possible to do end-to-end testing – we can’t see the “real” user experience, especially across connected services e.g. Office 365 and Yammer
  • The experience on mobile devices is different
  • Code sometimes has to be written a different way, or use a “mapping” in dev/test - especially anything around user profiles. For example, any kind of user name lookup/e-mail lookup/manager lookups and so on (a sketch of one such mapping follows this list)
  • Any integrations with 3rd party/external systems might not work properly if they use the current user’s details in some way (because a different identity is used)
  • Yammer – the lack of SSO means a couple of things:
    • Any standard usage e.g. Yammer web parts or Yammer Embed won’t “just work” - a login button is displayed, and the user has to supply secondary credentials to authenticate/get the cookie
    • Any Yammer API code might need a special “mode” – because you probably have Yammer SSO in production, but not elsewhere
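As an illustration of the “mapping” idea above, a trivial and entirely hypothetical helper might translate production user names to their dev/test equivalents – nothing more than a sketch, assuming the account aliases match across environments:

public static class TestAccountMapper
{
    // Hypothetical suffix for the dev/test tenancy
    private const string TestUpnSuffix = "mycompanydev.onmicrosoft.com";

    // e.g. "chris@mycompany.com" -> "chris@mycompanydev.onmicrosoft.com"
    public static string MapUpn(string productionUpn, bool isProduction)
    {
        if (isProduction)
        {
            return productionUpn;
        }

        string alias = productionUpn.Split('@')[0];
        return alias + "@" + TestUpnSuffix;
    }
}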

What can we do about these challenges?

So, it would be nice if we could make this situation better. Many of the issues stem from the fact that dev/test environments don’t have identity integration and AAD Sync configured, and what’s generally getting in the way there is that standing up a dedicated URL domain and Active Directory often isn’t trivial. The good news is that the sub-domain/UPN suffix approach I’ll talk about in the next post allows you to get past this – regardless of how many Office 365 environments you have, all you need is one URL domain and one Active Directory. In dev/test for us, this means ONE URL that we registered at GoDaddy for all our clients/projects. We run the on-premises AD simply in a small VM, which runs on developer machines.
Once the domain integration is implemented, the next step is to implement AAD Sync to actually get some users from AD to Office 365.  It will run off a set of users you choose (perhaps you would group some users for each dev/test tenancy into different OUs), and will perform the step of actually creating the users in Office 365. You could then optionally assign them a license if you want this test user to be able to login and use Office 365 functionality, and actual authentication will happen in the cloud if/when the user logs-in. If you want to implement Yammer Enterprise and Yammer SSO, you can now do that too. Directory integration is a pre-requisite for both of these things, but having solved that problem without major platform/infrastructure headaches, these possibilities open up for our dev/test environments.

Summary

So that’s a little on the problem space. Ultimately, developing for Office 365 at enterprise level does have some challenges, but many can be overcome and we can still strive for robust engineering practices. The next few blog posts will cover some of this ground – I’ll add links to the list below as the articles get published:
Thanks for reading!

Managed Metadata in SharePoint 2010


Managed Metadata in SharePoint 2010 – a key ECM enhancement


Last week I did a talk on ‘Enterprise Content Management enhancements in SharePoint 2010’ at the UK SharePoint User Group. Since the talk was 70% demos, simply posting the slide deck doesn’t really convey the discussion, so over the next 2 posts I’ll cover the same ground in written form. So this ‘ECM enhancements mini-series’ consists of:
Part 1: Managed Metadata in SharePoint 2010 – a key ECM enhancement (this post)
UPDATE: Part 1.5: Managed Metadata in SharePoint 2010 - some notes on the "why"
Part 2: ECM platform enhancements - Enterprise Content Types, Content Organizer, Scalability etc.
I want to focus on Managed Metadata first as it will be such a key ECM building block in SharePoint 2010.
Background
In SharePoint 2007, metadata was a huge blind spot – many organizations have a fundamental requirement to only allow certain ‘approved’ terms from a central list to be used as metadata. Broadly, the options were:
  • Use a choice or lookup field (scoped to web, or possibly deployed as Feature which can give broader reach but more maintenance problems)
  • Build a custom field type
  • Buy a vendor’s solution (which will involve a custom field type somewhere)
  • Attempt to simply guide authors to use the correct terms in a plain old textbox
Frequently, metadata terms are in a hierarchy which counts some of those options out. Otherwise the first and last options were lame/unsuitable across large deployments, and I can practically guarantee that any vendor or custom solution out there wouldn’t be as rich as a proper baked-into-SharePoint implementation. And this is what we’ve now got in SharePoint 2010 with the “Managed Metadata” capability – I wouldn’t say it covers all of the bases, but it can be extended easily. In my talk I joked that I couldn’t bear to do a talk without any code, and so showed how a notable hole in the metadata framework can be plugged in 10 minutes flat by using the Microsoft.SharePoint.Taxonomy namespace. More on this later.
A key thing to note is that the new Managed Metadata field now exists by default on many core content types such as ‘Document’ – so it’s right there without having to explicitly add it to your content.
SharePoint 2010 - Creating the central taxonomy

An organisation’s taxonomy is defined in the Term Store Management Tool – this is part of the Managed Metadata service application, and can be accessed either from Central Administration or from within Site Settings. Permissions are defined within the Term Store itself. For my demo I “borrowed” the taxonomy from a popular UK electrical retailer, and added the terms manually (but note you can also import from CSV). The following image shows the different types of node used to structure and manage a SharePoint 2010 taxonomy, and also the options available to manage a particular term:
TermStore  
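The Term Store Management Tool is the usual way to do this, but for completeness the same structure can also be created through the Microsoft.SharePoint.Taxonomy API used later in this post. A minimal sketch – the site URL, group and term names here are purely illustrative:

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;

public class TaxonomyBootstrap
{
    // Sketch: create a group, term set and a couple of terms in the default term store
    public static void CreateDemoTaxonomy()
    {
        using (SPSite site = new SPSite("http://intranet")) // hypothetical site URL
        {
            TaxonomySession session = new TaxonomySession(site);
            TermStore termStore = session.TermStores[0];

            Group group = termStore.CreateGroup("Products");
            TermSet termSet = group.CreateTermSet("Televisions");
            Term plasma = termSet.CreateTerm("Plasma", termStore.DefaultLanguage);
            plasma.CreateTerm("42 inch", termStore.DefaultLanguage); // child term

            termStore.CommitAll(); // nothing is persisted until CommitAll
        }
    }
}
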
Adding site columns - making terms available for use 

In order for authors to be able to use the terms on a document library, a column needs to be created (most likely on the appropriate content types) of type ‘Managed Metadata’ – a code sketch showing the programmatic equivalent follows this list. There are 2 key steps here:
  1. Mapping the column to the area of the taxonomy which contains the terms we wish to use for this field:

    ManagedMetadataSiteColumn

    Some notes on this:
    • The node selected is used as the top-level node – if it has children, these values can also be used in this field.
    • Site collections can optionally define their own term sets at the column level (i.e. leverage the authoring experience you’re about to see, but not just for organization-wide term sets) rather than use the central one. This is labelled as ‘Customize your term set’ in the image above, and allows terms to be added when this radio button is selected. 
  2. Specifying whether ‘Fill-in’ choices are allowed (shown on lower part of above image):
    • First thing to note is that ‘fill-in’ choices are only possible when the ‘Submission Policy’ of the linked parent term set is set to ‘Open’. This provides centralized master control to override the local setting on the column.
    • When the “Allow ‘Fill-in’ choices” option on the column is set to ‘Yes’, we specify that authors can add terms into the taxonomy as they are tagging items - in taxonomic terms, this model is known as a folksonomy, meaning it is controlled by end users/community rather than centrally defined. Although the setting is quite innocuous, this is hugely different in Information Architecture terms – it is typically beneficial when content authors are trusted and capable and there is a desire to grow the taxonomy ‘organically’, perhaps because a mature one doesn’t exist yet. 
    • I can imagine some document libraries may use both types (traditional taxonomy and folksonomy). One column is understood to be more controlled, the other free and easy. With some custom dev work on the search side, it would probably be possible (definitely if you have FAST) to weight the more controlled field higher than the folksonomy field in search queries – thus providing the best combination of tagging and “searchability”.
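As flagged above, the same kind of column can also be created programmatically. A rough sketch, assuming a ‘Products’ group with a ‘Televisions’ term set already exists (the names are illustrative):

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;

public class ManagedMetadataColumn
{
    // Sketch: create a 'Managed Metadata' site column bound to an existing term set
    public static void AddProductCategoryColumn(SPSite site)
    {
        TaxonomySession session = new TaxonomySession(site);
        TermStore termStore = session.TermStores[0];
        TermSet termSet = termStore.Groups["Products"].TermSets["Televisions"];

        TaxonomyField field = site.RootWeb.Fields.CreateNewField(
            "TaxonomyFieldType", "Product Category") as TaxonomyField;

        field.SspId = termStore.Id;    // which Managed Metadata service application
        field.TermSetId = termSet.Id;  // maps the column to the chosen term set
        field.AnchorId = Guid.Empty;   // Guid.Empty = whole term set; a term ID scopes it to that branch
        field.Open = false;            // true would allow 'fill-in' (folksonomy) choices
        field.AllowMultipleValues = false;

        site.RootWeb.Fields.Add(field);
    }
}
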
The end user experience – web browser
Now that we have a managed metadata site column, when a user is tagging a document in an appropriate library they can either get a ‘type-ahead’ experience where suggestions will be derived from the allowed terms:
ManagedMetadataTypeAhead  

..or they can click the icon to the right and use a picker to select (e.g. if they don’t know the first letters to type):
ManagedMetadataPicker
The document is now tagged with an approved term from the taxonomy. Note that if the field allows fill-in choices (i.e. it’s a folksonomy field), this dialog has an extra ‘Add new item’ link for this purpose:
ManagedMetadataAddNewItem




The end user experience – Office 2010 client
Alternatively, content authors can tag metadata fields natively from within Office 2010 applications if they prefer. This can be done within the Document Information Panel, but also in the new Office Backstage view which I’m liking more and more. They get exactly the same rich experience – both type-ahead and the picker can be used just as in the browser:
OfficeBackstage
And it’s things like this which other implementations (e.g. vendor/custom) just typically do not provide.
So that’s the basics, onto some other aspects I discussed or demo’d.
Managed Metadata framework features
  • Synonyms – a term can have any number of synonyms. So if you want your authors to, say, tag items with ‘SharePoint Foundation’ instead of ‘WSS’, you’d define the latter as a synonym of the former. In my television specifications demo, I added some phoney terms ‘Plasma Super’ and ‘Plasma Ultra’ to my preferred term of ‘Plasma’, and showed that in the user experience the synonyms show up (indented) in the type-ahead, but cannot actually be selected – the preferred term of ‘Plasma’ will always end up in the textbox:

    ManagedMetadataSynonymTypeAhead

    In case you’re curious as to equivalent picker experience, this shows synonyms in a ‘tooltip’ kind of way when you hover over the term.
  • Multi-lingual – for deployments in more than one language, the metadata framework fully supports the SharePoint 2010 MUI (Multi-lingual User Interface), meaning that if the translations have been defined, users can tag items in the language tied to the locale of the current web. The underlying association is the same, since the value actually stored in the SharePoint field is partly made up of the term’s ID.
  • Taxonomy management – as shown in the term store screenshot way above, terms can be copied/reused (so a term can exist in multiple locations in the taxonomy tree without being a duplicate i.e. in a ‘polyhierarchy’ fashion – a common requirement for some clients), deprecated (so no new assignments of the term can occur), merged and moved etc. In short, the types of operation you’d expect to need at various times.
    • I’d add a note that these are possible against terms in the taxonomy – the parent node types of term set and group (in ascending order) logically don’t have the same options, so if you make the beginner’s mistake of creating a term set when you really wanted a term with a hierarchy of child terms underneath, you have some retyping to do as you can’t restructure by demoting a term set to a term.  The key is simply understanding the different node types and ideally having more brain cells than I do.
  • Descriptions – minor point, but big deal. Add a description to a term to provide a message to users (in a tooltip) about when and how to use a term. This can be used to disambiguate terms  or otherwise guide the user e.g. “This tag should only be used for Sony, not Sony Bravia models”.
  • Delegation/security – permissions to manage the taxonomy are defined at the group level (top-level node), so if you wish to have different departments managing different areas of the tree, you can do this if you create separate groups.  Related to this, each term set can be allocated a different owner and set of stakeholders – this isn’t security partitioning, but does provide a place to specify who is responsible and who should be informed of changes at this level (in a RACI kind of way).
  • User feedback – if the term set has a contact e-mail address defined, a ‘Send feedback’ mailto link appears in the term picker, thus providing a low-tech but potentially effective way of users suggesting terms or providing feedback on existing terms.
  • Social – a user’s tagging activity will be shown in their activity feed
No doubt I’ve missed some – add a comment if any spring to mind please!

Extending the metadata framework – adding approval
So there are some great features in the framework, but one thing that seems to be ‘missing’ is the idea of being able to approve terms before they make it into the central taxonomy. So perhaps we want to allow regular users to add terms into the taxonomy quite easily, but only if they are approved by a certain user/group - this would give a nice balance between a centrally-controlled taxonomy and a true folksonomy. I put the word ‘missing’ in quotes just now because quite frankly, it’s pretty trivial to build such a thing based on a SharePoint list and that’s just what I did in my talk. I’m sure more thought would need to go into it for production, but probably not much more.
All we really need is to set up a list somewhere, add some columns, and add an event receiver. Adding an item to my list looked like this – I need to specify the term to add and also the parent term to add it under (using a managed keywords column mapped to the base of my taxonomy, meaning terms can be added anywhere):
TermToBeApproved
Then I just need some event receiver code to detect when an item is approved, and then add it to the term store:
public class TaxonomyItemReceiver : SPItemEventReceiver
{
    public override void ItemUpdated(SPItemEventProperties properties)
    {
        // Approval Status "0" = Approved
        if (properties.ListItem["Approval Status"].ToString() == "0")
        {
            string newTerm = properties.ListItem.Title;
            TaxonomyFieldValue parentTerm = properties.ListItem["Parent term"] as TaxonomyFieldValue;

            // Find the parent term in the term store, add the new term beneath it and commit
            TaxonomySession session = new TaxonomySession(properties.Web.Site);
            TermStore mainTermStore = session.TermStores[0];
            Term foundTerm = session.GetTerm(new Guid(parentTerm.TermGuid));
            Term addedTerm = foundTerm.CreateTerm(newTerm, mainTermStore.DefaultLanguage);
            mainTermStore.CommitAll();
        }

        base.ItemUpdated(properties);
    }
}

My code simply finds the term specified in the ‘Parent term’ column, then adds the new term using Term.CreateTerm() in Microsoft.SharePoint.Taxonomy. Note the use of the TaxonomyFieldValue wrapper class – this is just like the SPFieldLookupValue class you may have used for lookup fields, as terms are stored in the same format with both an ID and label so this class wraps and provides properties.
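Going the other way – writing a term into a managed metadata field from code – TaxonomyField.SetFieldValue() builds that stored value for you. A small sketch, where the group, term set and field names are illustrative:

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;

public class TaggingExample
{
    // Sketch: programmatically tag a list item with an existing term
    public static void TagItem(SPListItem item, string fieldName, string termName)
    {
        TaxonomySession session = new TaxonomySession(item.Web.Site);
        TermStore termStore = session.TermStores[0];
        TermSet termSet = termStore.Groups["Products"].TermSets["Televisions"];
        Term term = termSet.GetTerms(termName, false)[0]; // assumes the term exists

        TaxonomyField field = item.Fields[fieldName] as TaxonomyField;
        field.SetFieldValue(item, term); // writes the '<wssId>;#<label>|<termGuid>' value for us
        item.Update();
    }
}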

Once this code has run, the term has been added to the store and is available for use throughout the organization – perhaps the best of both worlds. Amusingly, when we got to the “soooo, did it work?” bit in my talk the demo gods mocked me and the type-ahead on the term picker waited a full 10 seconds before the term came in, leading to a big “ooof……[pause]…..woohoo!” from the audience which capped off a hugely fun talk (for me at least).

SharePoint 2013 – my view on what’s new


SharePoint 2013 – my view on what’s new (particularly for developers)


So the SharePoint 2013 (previously known as ‘SharePoint 15’, the internal name) public beta is finally here. And that means that MVPs, TAP participants and other folks with early access are no longer bound by their non-disclosure agreements and can now talk about the product publicly. No doubt there will be a flurry of blog posts, but I wanted to write up my thoughts on what has struck a chord with me in the next version – partly because I have friends and colleagues who might look to me for this information, but mainly because it helps me crystallize my thinking on some of the new aspects. This started as a “developer perspective” article, but hopefully also gives a sense of what the new version brings for everyone.
If you’re a technical person, my view is that developers have a much bigger learning curve than IT Pros in this release. I might get flamed for that, and certainly IT Pros who need to deal with very large scale or need to know low-level detail on infrastructure topics might disagree, but that’s my view and I’m sticking to it :)
So this post represents my list – not in any order of importance.

Social

So it’s pretty interesting in the light of the Yammer acquisition, but yes, mature social capabilities are actually native in SharePoint 2013. Of course, they’re not quite the full hit that Yammer and Newsgator are, but my guess would be that the core SharePoint functionality will now meet many folks’ requirements – for the current generation of social intranets at least. I saw Steve Ballmer’s comments about the viral Yammer model too, but frankly I have to think that a big driver for the acquisition was to remove the “compete” element and bring Yammer into the fold.
The newsfeed (as it is now referred to - no longer “activity feed”) is much richer, and supports many of the enhancements I helped build for a client running SharePoint 2010 (see my post Extending SharePoint 2010 social capabilities). The newsfeed is now fully “two-way”, meaning items can be commented on, liked, and so on – so it’s now a full Facebook-style feed:
SharePoint 2013 Newsfeed - large
As you might be able to tell from the image, some other capabilities include:
  • @mentions
  • #hashtags
    • These are searchable, and can be followed. Effectively the new hashtag bit has been merged with the existing SP2010 functionality of being able to follow a Managed Metadata tag.
  • Ability to ‘follow’ a site or specific documents
  • Groups:
    • In addition to posting to everyone, I can opt to post only to the members of a specific team site – which could be a specific community within the company
  • Ability to post links and pictures (with thumbnails automatically generated/used in the feed)
  • Document/media preview from within the feed
    • No need to open a new window or be taken out of context to get a feel for the document/video/whatever
  • Autocomplete:
    • The UI feels highly-usable – when I type @ to get a person’s name, or # for a hashtag, I get an autocomplete box which gives me categorized suggestions. In the case of an @mention, it initially gives me only people I’m following (left image), but as I type more and no results are found, it expands the search to all people (right image):

      @mention_autocomplete@mention_autocomplete2
There is also a ‘team’ newsfeed for keeping up with what’s happening in a particular team site. So effectively SharePoint 2013 has just about all of the social features (and a couple more) we built for one of my old projects (see Extending SharePoint 2010 social features). I’m happy to see that the feature set looks good.
As I’ve predicted before, the API has been substantially re-engineered and a lot of the cruft I talked about in my “how we did it” post has gone away. You’ll see an option in Central Admin (in My Site settings within the UPA) to enable ‘SharePoint 2010 activity migration’.

Everything is an app

Certainly an interesting move on Microsoft’s part - but now if you create a couple of lists in SharePoint, in the SharePoint UI that’s “an app”. I can understand where Microsoft are coming from – not every new user understands what a SharePoint list, or a view, or a content type gives them, but frankly even my parents know what an “app” is. Still, there will be a level of confusion for users familiar with earlier versions – and folks may need some hand-holding around this. But when you think about the future and the next 2 versions of SharePoint (e.g. to “SharePoint 2019”), maybe it’s a good thing and these are concepts which should be switched earlier rather than later.
AddAnApp
Of course, developers can create custom apps – that’s really the main point here, and it’s important since some of the challenges around getting code into an enterprise environment have arguably been addressed (more on this later). Here’s what the experience can look like – the site owner can install an app from the ‘Your Apps’ area:
AddAnApp
From then on, Joe User can access the app from the All Site Content page:
OpeningAnApp
With apps that give a full page experience, when he/she clicks on the app, they get taken to a completely different location which has a link back to the site which hosts the app. This is effectively the start page for the app:
COBAppDefaultPage

App marketplace – plus remote apps

The big news is SharePoint finally gets an “app store”, meaning it’s far easier to add small solutions to SharePoint which fulfil a particular need (e.g. a timesheeting app, some social extensions, whatever). Given the success of this model in the consumer phone space, and its integration into Windows 8, it would have been bizarre for SharePoint 2013 not to have this also. It’s pretty revolutionary, since several things come together to make it much easier to get a 3rd party customization onto a SharePoint environment – whether the person trying to acquire the app is a team site owner somewhere, or the core SharePoint admin team:
  • No need to have server administrators be involved with deployment/activation
  • Integrated payment/procurement:
    • Got budget? Well if the facility is enabled by server administrators, all you might need is a credit card! Of course, more streamlined payment options (e.g. set up company account details, pay with that) are also possible, though management teams everywhere will be pleased to hear there are a number of governance controls in this area
  • Regulated (by Microsoft) marketplace:
    • In the same way phone apps have to go through a certification process (involving checks on performance, legality, use of user interface controls etc.), the same will apply to SharePoint apps. It only gets into the public marketplace if all those boxes are checked, and many of the checks are performed by humans
  • Corporate marketplace option:
    • In addition to the public marketplace, SharePoint 2013 provides a framework to have a corporate marketplace – in other words, an internal app catalog where only apps approved for use within the organization are added. Whoever owns the SharePoint platform controls whether the public and/or corporate marketplace are enabled. In the case of SharePoint Online, this is decided on a tenant-by-tenant basis
  • App requests:
    • When governance controls are enabled, SharePoint provides a mechanism for end-users to “log a request” for a certain application from the public marketplace. Administrators can then track which apps are commonly requested, perform some internal validation, and then make them available in a controlled way.
  • Safety:
    • There’s no point in having apps if they can cause harm to the SharePoint environment. Any individual ‘rogue’ app should not compromise the SharePoint environment as a whole. Of course, Microsoft introduced support for this with sandbox solutions, and it was just strange that the app marketplace piece didn’t come with it – but still, the foundations were laid. The big issue of course, was that sandbox solutions were/are intentionally limited in their capability (to support the safety aspect), and developers constantly hit the limiter in terms of what they could implement. To a large extent, Microsoft have removed these constraints, in quite an innovative way – I discuss this lower down in the ‘App hosting options’ section
  • Upgrade framework:
    • Just in the same way we’re now used to updates to apps on our phone being pushed out by the vendor (and having the choice of whether we apply the update or not), a similar framework exists for SharePoint apps. This is great news for both users and vendors – it can mean quicker development cycles, a more efficient way of rolling out bug fixes and enhancements (e.g. it turns into at least 50% push, as opposed to 100% pull)
Needless to say, no doubt many vendors and developers with product ideas have been (or are about to start) scrambling to work out how they can leverage this. Of course, the marketplace won’t be empty during the beta phase – big name product vendors will have worked with Microsoft on the ISV TAP program. After all, an empty marketplace isn’t in anyone’s interest.

App hosting options – including remote apps with OAuth

So, I mentioned that many of the constraints around sandbox solutions have been removed in SharePoint 2013. Here’s how – apps can now run separately from the SharePoint environment itself. So if you have a need to run code which does some heavy processing (which could be shut down by SharePoint because it’s using too much CPU/memory/disk IO), then run it on a non-SharePoint server. In other words, take the problem outside of the SharePoint equation and deal with it there – the SharePoint admins (who might be Microsoft if you’re a SharePoint Online customer) are happy because the SharePoint performance/uptime is assured, and the developers (or the vendor selling the product) are now happy because they have a solution to the constraints of the sandbox. This means you can effectively build anything and target the sandbox or O365 – you just have to provide the resources (e.g. web hosting, processing power and if needed, data storage) elsewhere. This external “engine” can then talk to SharePoint (e.g. to read and modify data) using the client APIs such as CSOM and REST – these are *much* more capable now. OAuth is used for authentication, whereby the external app is granted a token to access a particular SharePoint site for a specified duration. Here are the top-level options:
  • Remote - Azure
    • A lot of work has gone into enabling this. This is known as an Azure auto-provisioned app, and is integrated into the app deployment process – the developer/vendor can ensure that any “external to SharePoint” infrastructure is created on Azure as the app is deployed in SharePoint. This could include a SQL Azure instance to store data (so in the sandbox we’re no longer constrained by having to store data in SharePoint lists within the site collection), and Azure processing capability to do any heavy-lifting. Of course, this needs to be scaled appropriately for the app to work well, but it does work well as far as SharePoint is concerned.
  • Remote - Developer-hosted
    • In this context, “developer-hosted” means “whoever is building the app supplies the hosting/infrastructure”. You don’t have to use Azure for any external portions of an app. In fact, you can use anything – another SharePoint farm, some other servers you have running IIS and SQL, Amazon Web Services, and more. By that, I mean this separation brings some interesting possibilities – since SharePoint doesn’t care about the processing that goes on here so long as you call into it using the client APIs, it’s effectively technology-agnostic. You could implement this part on a LAMP stack (Linux, Apache, MySQL, PHP) for all SharePoint cares – I nearly fell off my chair the first time I heard this! I’m not saying that particular aspect is hugely interesting to me personally – though it could certainly be relevant to product vendors, hosters or an organization with dev teams with different skills – but I do find it interesting that things aren’t restricted to IIS. And of course it does illustrate the separation of SharePoint and the remote piece of a remote app.
  • On-premise - SharePoint-hosted
    • If you don’t need any external processing/data storage, then an app can be purely SharePoint-hosted. Notably, this is not a sandboxed solution – that model still exists, but this is something different. Server-side code is *not* allowed in a SharePoint-hosted app, so all SharePoint code is CSOM or REST (in addition to the HTML, CSS and JavaScript elements). What’s really important to understand about this model is that any SharePoint artifacts required by the app (e.g. pages, lists, content types, and so on) do not live in the actual site collection where the app is installed – rather, they get created in a special “app web” on a separate web application which is isolated from the site where the app is installed. Visual Studio 2012 with SharePoint dev tools understands this architecture when you deploy and test your app.
Note that SharePoint gives support for extending the branding of the hosting site into the app site (remember, even if it is hosted in the same SharePoint farm, it is still a separate web application/site collection).
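To give a flavour of what the “remote” code looks like, here’s a minimal CSOM sketch reading data from a SharePoint site. Authentication is deliberately omitted – a real remote app would attach an OAuth access token to its requests:

using System;
using Microsoft.SharePoint.Client;

public class RemoteAppSample
{
    // Sketch: a remote component reading SharePoint data via the client object model
    public static void PrintWebInfo(string siteUrl)
    {
        using (ClientContext ctx = new ClientContext(siteUrl))
        {
            Web web = ctx.Web;
            ctx.Load(web, w => w.Title, w => w.Lists.Include(l => l.Title));
            ctx.ExecuteQuery(); // one round-trip for everything requested above

            Console.WriteLine(web.Title);
            foreach (List list in web.Lists)
            {
                Console.WriteLine(" - " + list.Title);
            }
        }
    }
}
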
Of course, we still have the traditional development models too. So if you’re asked to build a customization in SharePoint and you can deploy code to the farm, you could potentially choose from:
  • Farm solution
  • Sandbox solution
  • SharePoint-hosted app
You’d have to decide whether your solution can be built using an app (i.e. whether the client APIs do what you need), and whether the app model is giving you anything (i.e. should users acquire the app from the marketplace).

Enhanced client APIs

No doubt partly to support apps, the client APIs have been given some love and are much extended. The Client Object Model (CSOM) has many new capabilities, and a new OData/REST API is introduced. My understanding is that both have the same capabilities, but the different programming styles are meant to provide choice – so if you’re coding in JavaScript or Silverlight you’d probably use CSOM, but if you’re talking to SharePoint from another platform (e.g. mobile app) then the REST API would be convenient. The early documentation listed the following as capabilities:
  • Existing CSOM capabilities, plus:
  • User Profiles
  • Search
  • Taxonomy
  • Feeds
  • Publishing
  • Sharing
  • Workflow
  • IRM
  • Analytics
  • Business data
  • ...and more
Note that the new REST API is known as _api since it can be accessed using the format http://somesite/_api/Web/title (with that example returning the title of the root web).
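As a simple illustration of the REST style (using the .NET 4.5 HttpClient mentioned in the next section), a sketch like the following would fetch that web title as JSON – authentication is omitted, so treat it as a shape rather than a complete sample:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class RestApiSample
{
    // Sketch: calling the _api REST endpoint with HttpClient
    public static async Task<string> GetWebTitleJsonAsync(string siteUrl)
    {
        using (HttpClient client = new HttpClient())
        {
            client.DefaultRequestHeaders.Accept.Add(
                MediaTypeWithQualityHeaderValue.Parse("application/json;odata=verbose"));

            // e.g. http://somesite/_api/Web/title returns the title of the root web
            HttpResponseMessage response = await client.GetAsync(siteUrl + "/_api/Web/title");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}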

Use of .NET 4.5, but no MVC (within SharePoint at least)

Yes, of course the latest SharePoint is using the latest version of .NET. So we get a new GAC, new language features (e.g. web API, await/async, new HttpClient class etc.) and Visual Studio workflows are simpler, but the bigger impact is that if your code has to target both SharePoint 2010 and 2013, then you have some things to deal with (like avoiding the new language features in your codebase, for one). If you follow developments outside of SharePoint in the wider ASP.Net world, it’s interesting that there is no option for using MVC and that a custom SharePoint page will continue to use the WebForms model – meaning Viewstate, postbacks for click events of .NET controls and so on. SharePoint developers the world over probably breathe a sigh of relief at not having to learn a new paradigm on top of a new product, but then pause to think if that’s ultimately a good thing. I don’t think it is, personally. If you care that much, there *are* in fact options for surfacing MVC pages within SharePoint 15, but only in SharePoint ‘remote apps’ – and that stuff is far bigger for SharePoint than MVC vs. WebForms.

Use of Metro styling

Some new UI paradigms to learn, but on the plus side even I can style some colored blocks :)
TeamSite

Search-driven content

A capability some other CMS have, which I’ve wanted to see in SharePoint publishing sites for a long time, is the idea that a piece of content can effectively be surfaced in different locations (possibly with different branding/surrounding content), despite the fact that it is really just one piece of content. This content could be edited in one location. I might be showing my age here, but Content Management Server 2001/2002 kind of had this with connected postings, so it was annoying that nothing like it existed in the product for so long.
In SP2013, search is used to deliver this. That makes sense because search has long been the only way to “see past” the site collection boundary in SharePoint, and so this facilitates content sharing across site collections and web applications. You’ll find a new category of web parts called ‘Search-driven content’ – the web parts in here are all pre-configured variants of the Content by Search Web Part (e.g. “Items Matching a Tag”). The base Content Search web part can be found in the Content Rollup category. This is like a Content by Query Web Part on steroids, powered by search:

SearchDrivenContent_WebParts
ContentBySearch_QueryHelper
Along similar lines,  any list/library in SharePoint can be nominated as a ‘catalog’ and then shared across site collections. This is part of the ‘Cross-Site Collection Publishing’ Feature and is also search-powered.

Skydrive Pro - Dropbox/Skydrive-like sync to PC, and simple sharing

One nifty feature that will go down well is that users can sync any document library to a folder on their PC, making offline access smoother – no client such as SharePoint Workspace is required. Additionally, SharePoint 2013 introduces the concept of ‘sharing’ – this is really the same permissions model we’re used to already, but with different semantics to hopefully make things simpler for users. So I can ‘share’ a document library, rather than ‘grant someone contribute permissions’.

Apps for Office

An “app for Office” (known as an ‘Agave’ during pre-beta) is effectively a new form of “Office Add-in” – some good examples I’ve seen include a Bing map appearing next to some Excel rows containing addresses, or a Word Add-in which shows some SharePoint list items and allows them to be dropped into the document. Hopefully you get the idea. What’s interesting about it is that despite the client being Office, the development is done in web technologies such as JavaScript, CSS and HTML. It’s almost like an IFrame hosted in Office. This is kind of cool because there’s now a lot of potential for re-use across the browser and Office clients – i.e. I no longer need to learn Office APIs to target those clients.

AppFabric

AppFabric is a caching technology which exists as a standalone install (and has done for some time now – it doesn’t require Windows Server 2012). SharePoint installs it as a pre-requisite if it is not already present. AppFabric essentially joins together the memory of multiple web servers, and allows you to use this as a cache – in this way you don’t need to worry about different machines having different values in the cache, or having to use a CacheDependency or anything like that. Effectively you store name/value pairs and access them from anywhere. The cache can be divided into different areas (partitions), and there are options for making it highly available.
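To illustrate the programming model (this is plain AppFabric, nothing SharePoint-specific), a minimal sketch of putting and getting a value from the default cache might look like this – it assumes the AppFabric client is already configured in the application’s config file:

using System;
using Microsoft.ApplicationServer.Caching; // AppFabric caching client

public class DistributedCacheSample
{
    // Sketch: store and read a value in the distributed cache
    public static void Demo()
    {
        DataCacheFactory factory = new DataCacheFactory(); // reads client configuration
        DataCache cache = factory.GetDefaultCache();       // or factory.GetCache("MyNamedCache")

        cache.Put("navigation-xml", "<nav>...</nav>", TimeSpan.FromMinutes(10)); // value + expiry
        string cached = (string)cache.Get("navigation-xml"); // same value from any web server

        Console.WriteLine(cached);
    }
}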

Summary

So, lots of new capabilities and lots of things for developers to get to grips with – and there’s lots I haven’t mentioned too. Like JavaScript templates (think jQuery templates for well-known SharePoint controls e.g. a list view) and use of Azure for workflow hosting. I’m sure good content on these can be found elsewhere (if not now, then soon).
Of course, this has been a technically-focused post and other folks will take a closer look at end-user enhancements. I’ll be covering a few technical areas in detail, as will many others, so it might be time to dust off the RSS reader and get on Twitter if you’re not there already.

Using the Content Search web part


Using the Content Search web part (and understanding SP2013 search)


Articles in this series:
Meeting client requirements with SharePoint often involves aggregating items somehow – often we want to display things like “all the overdue tasks across all finance sites”, or “navigation links to all of the subsites of this area” or “related items (e.g. tagged with the same term)” and so on. In SharePoint 2010 there have been two main ways of accomplishing this:
  • Content Query web part
  • Custom solution built on SPSiteDataQuery (site collection-scoped), SPQuery (list-scoped) or search API
To a lesser extent, using the search web parts as part of a custom solution may also have been an option. Regardless, it was common to need custom code to meet such requirements. Maybe we needed to add paging to the results, or we needed to use some value obtained dynamically through code (e.g. from the current site/current page/current user/something else) – several Codeplex solutions arose from this gap, and lots of lines of code were written.
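For context, a typical custom rollup of that kind might look something like this sketch using SPSiteDataQuery – the CAML, list template ID and field names are illustrative:

using System;
using System.Data;
using Microsoft.SharePoint;

public class OverdueTasksRollup
{
    // Sketch: aggregate overdue tasks across the current site collection
    public static DataTable GetOverdueTasks(SPWeb web)
    {
        SPSiteDataQuery query = new SPSiteDataQuery();
        query.Lists = "<Lists ServerTemplate='107' />";   // 107 = task lists
        query.Webs = "<Webs Scope='SiteCollection' />";   // cross every web in the site collection
        query.ViewFields = "<FieldRef Name='Title' /><FieldRef Name='DueDate' />";
        query.Query = "<Where><Lt><FieldRef Name='DueDate' />" +
                      "<Value Type='DateTime'><Today /></Value></Lt></Where>";

        return web.GetSiteData(query); // results come back as a flat DataTable
    }
}
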
SharePoint 2013 presents the Content Search web part as a new option – its capabilities mean that simply using the web part (with some front-end work to meet look and feel requirements) will meet many needs, without use of custom code. If you’re a developer, the following screenshot should give you a clue as to why code won’t be required too often (with one of my favorite options highlighted):
CSWP_BasicsTab_AdvancedMode_PropertyFilterValues
It’s incredibly powerful, and it’s a good idea to understand what it can do.

Understanding the deal with search-based solutions

As the name suggests, the Content Search web part is powered by SharePoint’s search function. As such, there are the following considerations:
  • The CSWP can be configured to “see” items anywhere in SharePoint (potential advantage)
    • In contrast, the CQWP and related SPSiteDataQuery can only search within the current site collection - the site collection “boundary” is a factor
  • Results shown are not guaranteed to be 100% up-to-date (potential disadvantage)  
    • Since a search crawl has to run before any content changes will be shown in search results (remember this can include titles, summaries, images and so on for pages/documents), if a user creates/edits an item it will not be shown immediately. This can be a critical point.
    • Furthermore, my understanding from a FAST engineer is that in SharePoint 2013 there is no longer any means of pushing a document directly into the search index – in previous FAST incarnations including FAST for SharePoint 2010, there were options such as docpush.exe for “proactively” adding an item to the index, rather than waiting for the next search crawl.
    • That said, it should be possible to obtain much lower indexing latencies in SharePoint 2013 via the “Continuous Crawl” capability. In most deployments, my guess would be that changes would be reflected within a few minutes at most if this is enabled (where previously you may have had an incremental crawl scheduled every 15, 30 or 60 minutes for a SharePoint sites content source).
Summary - if the functionality you are creating needs fully up-to-date results (e.g. a user has created/edited something and it needs to be *immediately* reflected in the site) then you will probably need to stick with the original approaches (i.e. a query-based rather than search-based solution).
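For comparison, a search-based custom solution queries the index directly – a rough sketch using the SharePoint 2013 server-side search API, where the query text and URL are illustrative and the results are only as fresh as the last crawl:

using System;
using System.Data;
using System.Linq;
using Microsoft.Office.Server.Search.Query;
using Microsoft.SharePoint;

public class SearchBasedRollup
{
    // Sketch: query the search index for documents under a given path
    public static void PrintFinanceDocuments(SPSite site)
    {
        KeywordQuery query = new KeywordQuery(site);
        query.QueryText = "ContentClass:STS_ListItem_DocumentLibrary path:\"http://intranet/finance\"";
        query.SelectProperties.Add("Title");
        query.SelectProperties.Add("Path");
        query.RowLimit = 50;

        SearchExecutor executor = new SearchExecutor();
        ResultTableCollection results = executor.ExecuteQuery(query);
        ResultTable relevant = results.Filter("TableType", KnownTableTypes.RelevantResults).FirstOrDefault();

        if (relevant == null) return; // no relevant results table returned

        foreach (DataRow row in relevant.Table.Rows)
        {
            Console.WriteLine(row["Title"] + " - " + row["Path"]);
        }
    }
}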

Terminology – new concepts in SharePoint 2013 search

So if we’re going to build solutions built on SP2013 search, we need to have a basic understanding of some concepts – we’ll run into these time and time again:
Concept – my quick definition:

Result Source – Like a search ‘scope’ in SP2007/SP2010, but on steroids. Rules are specified to say what the scope consists of – e.g. DOCUMENTS in my TEAM SITES area (constraining on content type and path in this example). Created centrally, or at the web level. Result Sources can be used in just about any search-related functionality, including the Content Search web part.

Query Rule – Like a ‘best bet’ on steroids. Ability to show specially formatted results at the top of the results list (e.g. a Promoted Result) for highly-recommended content. In addition to a Promoted Result, we can also do a Result Block (an example could be a block of 5 image results within the main list of text links). Another option is to Change the Ranked Results – i.e. put something at the top, or promote or demote something by 1-10 (previously known as a ‘boost’ in FAST). LOTS of flexibility in matching the user’s query, including regular expressions and matching terms in the Managed Metadata store.

Display Template – A JavaScript template (similar to jQuery templates) which controls formatting – in the case of the CSWP, this effectively replaces the use of XSL for look and feel. There is a separate template to pick for the overall control and for the formatting of an individual item. The .js files for the templates are stored in the ‘Content Web Parts’ subfolder of the Master Page Gallery. Side note – in the context of a search results page (rather than CSWP), a Display Template is associated with a Result Type (e.g. Word doc, wiki page, PowerPoint file etc.) and so we have granular control over how each is displayed (and when). Extremely cool.
So, lots of flexibility in the search infrastructure. Let's see some of this in the context of the Content Search web part.

Configuring the Content Search web part

There are two main aspects to this:
  • Displaying the right items (Search Criteria)
  • Look and feel (Display Templates)
In terms of the search criteria, there is *enormous* flexibility in what the CSWP - and the underlying search capability - can do. For one thing, it’s possible to either directly configure the query entirely in the properties of this web part instance (e.g. show me all documents which meet criteria X), and/or start from a pre-existing Result Source to do some of the filtering. Combining the approaches will be fairly common – an example could be “search only on wiki pages” (an OOTB Result Source) but only show items tagged with X (this defined directly in the CSWP properties).
Interestingly, configuring a centralized Result Source and configuring a Content Search web part on a page are very similar, even though a “reusable scope” and a web part would seem to be very different things in SharePoint. The overlap comes because underneath both there is a search query which does the work of isolating the desired results - indeed, as we'll see later, the same “Query Builder” UI is used in both places (with a couple of minor differences). So, if you’ve learned how to configure a CSWP, you’ve essentially also learned how to create a custom Result Source.


Configuring the web part

The first thing to understand is that the Content Search web part appears in different guises in the web part gallery. The ‘main’ web part is in the ‘Content Rollup’ category:
[Image: CBS_MainWebPartInAdder]
But there are also many pre-configured versions available, each of which finds a specific type of content. This is great for end-users who don’t necessarily think in terms of needing a ‘Content Search’ web part:
[Image: CBS_WebPartsInAdder]
And just to prove the point, the web parts above correspond to the following .webpart definition files in the Web Part Gallery:
[Image: CBS_WebParts]
Once the web part has been added to the page, it can be configured via its tool pane. The main configuration item is the query to use, and configuring this starts with clicking the ‘Change query’ button:
[Image: CSWP_properties]
This opens the “Build Your Query” dialog, which has tabs labeled BASICS, REFINERS, SORTING, SETTINGS and TEST. This thing is known (unsurprisingly) as the Query Builder – what you might not realize is that it’s used in several places in SharePoint 2013:
  • Configuring a Content Search web part (obviously)
  • Creating a Result Source (specifically in the Query Transform section)
  • Configuring a Search Results web part
There are some differences – for example, when configuring a Search Results web part there is no SORTING tab because this will be handled in the Result Source or the query. I’m going to talk about things from the perspective of the Content Search web part, but will call out any differences for the other usages – so hopefully by learning the CSWP, you also get to learn 75% of the search infrastructure.

BASICS tab – Quick Mode

Although the first tab is labeled ‘BASICS’, I’d say it’s actually the most involved - this is where the query itself is configured, and there is a ‘Quick Mode’ and an ‘Advanced Mode’. You’ll also notice - and let me just say I’d personally be willing to give the Product Manager for this feature A BIG HUG for this - that there’s a “live” results preview pane, permanently visible on the right-hand side of the Query Builder. This shows the first 10 results which would be displayed by running the currently configured search against the current index, without the need to save the web part after each change:
[Image: CSWP_BasicsTab_QuickMode]
Note that if you create your own query, then this preview pane is only able to show results when you are on the TEST tab. And we’ll talk about that towards the end.
Let’s now walk through the various configuration steps in here.

Select a query

In Quick Mode, the dropdown contains the Result Sources (see my definition above if you’ve forgotten already :)) which come out-of-the-box with SharePoint 2013 – one of these may provide a good foundation for what you need:
[Image: CSWP_BasicsTab_QuickMode_SelectQuery]
As you select a Result Source from the dropdown, other options may become available lower down. So if I want to find items matching a specific content type, I get this:
[Image: RestrictByContentType]
In fact, this option to restrict by content type appears for many of the pre-defined Result Sources, not just “Items matching a content type” – which makes sense, because it’s a common thing to include as a filter. Similarly, “Items matching a tag” and several other queries give this interface for selecting a tag to filter on:
[Image: RestrictByTag]
And, happy days, if I specify the tag by typing, I get auto-complete to help me pick the term – this is a fully-fledged Managed Metadata input field. Consequently there’s also full validation of the terms you type in (though this takes a few seconds to show), so if an author accidentally enters something which isn’t a known term, he/she should spot the mistake immediately:
[Image: TermValidation]
Consider also that the middle options – using the navigation term associated with the current page – are exactly what’s needed to build many types of ‘related items’ functionality; again, no code needed now.
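Under the covers, these Quick Mode selections are simply building KQL for you. Here’s a hedged sketch of what “items matching a content type, restricted to the current page’s navigation term” might boil down to – the content type name and the owstaxId… property name are hypothetical, and {Term.Id} is the query variable used when you pick the navigation-term option:

    {searchTerms} ContentType:"Meeting Minutes" owstaxIdWikiCategory:{Term.Id}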

Restrict results by app

In the next section, I can restrict the scope of the results to a particular location (e.g. the current site). This enables me to get something like the Content Query web part behavior of only searching within the current site collection if needed – because although we now have the power, it won’t always make sense to go across the entire farm :)
[Image: RestrictByApp]
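In query terms, this is essentially just a path restriction. If you were writing it by hand, restricting to the current site could be expressed with the {Site.URL} query variable ({SiteCollection.URL} is the site collection equivalent) – the exact text the Query Builder generates may differ, but the effect is the same:

    {searchTerms} path:{Site.URL}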

Add additional filters

In the next section I can supplement the query with any valid query text, e.g. a property filter. In this example, I’m adding a filter to only present items which were created by the current user:
[Image: AdditionalFilter]
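For reference, the property filter this adds boils down to something like the line below – Author is the managed property behind ‘Created By’, and {User.Name} is the query variable which resolves to the current user at query time (the exact text the Query Builder generates may be formatted slightly differently):

    Author:{User.Name}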

Sort results

When we scope our query to a pre-defined Result Source (as we are here in the CSWP ‘Quick Mode’), sorting is usually pre-defined at that level. The CSWP does give us the opportunity to override the sorting with some popularity-based ranking models (around most viewed/most clicked) instead, though – expect proper wording to appear in this dropdown in the RTM version, but you get the idea:
[Image: SortResults]
So what happens if none of the options presented so far do what you want? An example could be wanting to use an existing Result Source (e.g. ‘wiki pages’) but sort on Last Modified in descending order. Obviously the dropdown above does not allow that. We could create a custom Result Source and implement the query/sorting there, but that only really makes sense if we expect it to be re-used in multiple places.
In these cases, we can click into Advanced Mode (still on the BASICS tab).

BASICS tab – Advanced Mode

In Advanced Mode you basically get to specify the full query text yourself. In my mind, this is like building a solution with the search API in SP2007/SP2010 – I saw many custom solutions (and built several myself) which used the FullTextSqlQuery or KeywordQuery classes to find the right items. SharePoint 2013 makes it much easier to have this full control whilst still piggybacking onto the out-of-the-box web parts – meaning less work and more productivity.
When switching to the Advanced Mode, a couple of things become available:
  • A SORTING tab (details later)
  • Controls to help you build the query (which you’d previously do essentially by hand in earlier versions), with ‘Keyword filter’ and ‘Property filter’ options. These can be combined as you like, and the resulting query text appears in the textbox at the bottom:
[Image: CSWP_BasicsTab_AdvancedMode]
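As filters are added, the query text builds up in that textbox. A hedged example of the sort of thing you might end up with – a keyword filter (‘governance’) plus two property filters, using standard SP2013 managed properties:

    governance IsDocument:"True" FileExtension:"pdf"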

Avoid custom code by using tokens

There are many tokens which can be used when building a query in this way – often you might want to pass something into the query, such as a URL (querystring) parameter, the value in a particular field on the page, and so on. Being able to do this unlocks a huge range of possibilities for building solutions. This is where the first image in this article comes from – here’s a reminder:
[Image: CSWP_BasicsTab_AdvancedMode_PropertyFilterValues]
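Some illustrative examples of these tokens in use are below. The token families are real SP2013 query variables, but the managed property and field names on the left are hypothetical placeholders – the options surfaced in the Query Builder (as in the screenshot above) are the easiest way to see the full set available in your environment:

    owstaxIdProductCategory:{Term.Id} – the navigation term associated with the current page
    ProductCode:{QueryString.ProductId} – the value of a ‘ProductId’ querystring parameter
    CustomerName:{Page.Customer} – the value of a ‘Customer’ field on the current page
    path:{Site.URL} – restrict results to the current web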
In summary, when using the Advanced Mode of the query builder you should be able to target just about any content in your SharePoint environment.

SORTING tab (Advanced Mode only)

In SharePoint 2010 Enterprise Search, you could only sort by relevance/rank (the normal search engine approach) or date. FAST for SharePoint 2010 had more options (you could sort by a Managed Property). In SharePoint 2013, frankly the sort options alone are enough to blow your mind :)  If you don’t need anything specific around sorting then you can skip this bit, but if you do then here are your options:
First you can sort by way more things than just rank and date:
[Image: CSWP_SortTab]
One thing to note there – I’m unclear as to what makes it into that ‘Sort by’ list and what does not. It’s not Managed Properties as far as I can tell, so although the list is long many options may not be hugely useful. Still, better than before.
Usefully, you can now do multi-level sorting (sort by this, then by that). The ‘Add sort level’ link in the image above adds another row, allowing me to do things like sorting by URL depth (so items higher up in the site hierarchy show at the top), and then by rank (that makes sense, because there’ll be lots of items at the same URL depth so I do need two levels of sorting):
[Image: CSWP_SortTab_Custom]
Note that effectively what I’m doing here is building some sort of custom ranking model. This works great if I need something very specific on sorting, but also note SharePoint 2013 comes with several ranking models – the next section allows me to pick from these if I’ve left the ‘Sort by’ dropdown on ‘Rank’, unlike in the image above. This is because all these options are effectively different forms of rank – most are around People Search or popularity:
[Image: CSWP_SortTab_RankingModel]
And for those occasions when the client is telling you that his/her strategic document really has to be on page 1 of the results (but not a Promoted Result/best bet), you have ‘Dynamic ordering’ – you can boost/demote results, including the option to promote to the top:
[Image: CSWP_SortTab_DynamicOrdering]

REFINERS tab

In the context of search, refiners are usually the links on the search engine’s results page (typically in the left nav) which allow the user to further filter the results. So if I do a search for “meeting minutes” and get lots of results, it would be nice to be able to filter by, say:
  • Date range
  • SharePoint site (since minutes might be stored in individual project sites)
  • Author
  • ..and so on
However, in the context of the Content Search web part, refiners actually allow you to do this filtering as part of the initial query. The REFINERS tab is effectively a convenience to you, the person configuring the web part – what happens is that a search is performed whilst in edit mode, and all relevant refiners (e.g. managed properties) are presented as available refiners. These can be selected and moved over to the right-hand list:
[Image: CSWP_RefinersTab]
The effect of this is that a further filter is added to my query. In the example above, this may be easier than using a Property Filter on the BASICS tab – since there I get little assistance; I just select the property and type the value by hand:
[Image: CSWP_BasicsTab_PropertyFilter]
In the REFINERS tab, SharePoint is doing the search for me (as it’s configured so far), and only coming back with values which have been found in the returned results.
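In other words, ticking a refiner value here has much the same effect as adding an extra property restriction yourself – e.g. selecting ‘docx’ under a FileExtension refiner is roughly equivalent to appending something like this to the query (the mechanics under the covers differ slightly, but the effect on the results is the same):

    FileExtension:"docx"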

SETTINGS tab

The SETTINGS tab controls some high-level options for running the search:
[Image: CSWP_SettingsTab]

Query rules

Since these can be defined at the parent site or search service, it could be the case that your CSWP gets affected by one of these. As the radio button shows, this can be overridden, but consider that some types of Query Rules may not have an effect anyway - as a reminder (from the table at the beginning), a Query Rule can either:
  • Add a promoted result
  • Add a result block
  • Change the ranked results somehow (by modifying the query)
Out of these 3 actions, 1.5 of them could affect the results of a ‘default’ CSWP. This can be summarized:
Query Rule Action – will it affect CSWP results?

Add a promoted result – Not by default. When a search runs in SharePoint, multiple result sets are returned (e.g. ‘main results’, ‘best bet results’ and so on – in SP2013 the real names for these are ‘RelevantResults’, ‘SpecialTermResults’, ‘PersonalFavoriteResults’ and ‘RefinementResults’). Although a CSWP can be configured to show any of these tables, the default is ‘RelevantResults’ – and a promoted result gets added to ‘SpecialTermResults’.

Add a result block – Yes, if the result block is configured to be ‘ranked within core results’ (the default), rather than ‘shown above core results’.

Change ranked results – Yes.
For completeness, here’s the place in the CSWP where you select which search result set to use (e.g. if you want to switch from the default of ‘RelevantResults’):
[Image: CSWP_ResultTableSelection]

Options in the Results Table dropdown (shown to the left):

[Image: CSWP_ResultTableSelectionOptions]

URL rewriting

This one is fairly simple – if results are being returned from a catalog which is using “friendly” URLs, then the CSWP can override this to use the original URLs. It may not always make sense to use rewritten URLs in aggregations outside of the catalog pages, especially if you’ve implemented anything funky there.

Loading behavior

This is useful – specify whether the CSWP web part instance should load in the main page load (default) or in an AJAX manner after the main page has finished. Considering that a CSWP could either be the centerpiece of your landing page or merely some page footer navigation, it’s nice to be able to prioritize in this way.

Priority

Similarly, we can actually specify High, Medium or Low priority for each CSWP instance we use – great for the different usages we will have, although as per the description, note this only has any effect if the search service is overloaded.

TEST tab

The TEST tab is hugely useful – it gives you the ability:
  • To see the underlying query text (in Keyword Query Language [KQL]) which has been generated (though it must be edited in other tabs)
  • To see the preview when you are defining a query yourself (the preview pane will be empty on other tabs in this scenario)
[Image: CSWP_TestTab_Less]
Which is all great, but at first glance it’s easy to miss some extra functionality – if the ‘Show more’ link is clicked, other information becomes visible, including details of any refiners and Query Rules which have been applied. Below, I can see that a custom Query Rule I created has indeed been used – so there’s no guesswork on (for example) whether a certain item is actually being promoted or not:
[Image: CSWP_TestTab_More]
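To give a feel for it, here’s a hedged illustration of the kind of resolved KQL you might see on this tab if the earlier examples in this post were combined – the values obviously depend on the page, the user and the environment:

    ContentTypeId:0x0101* path:"http://intranet/sites/teams" Author:"Jane Bloggs" FileExtension:"docx"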

Sidenote - listing items from ONE site/list/library with the Content Search web part

Worthy of a quick note - if all you need to do is roll up content from one list/library, then you *can* do this with the CSWP – in the query, simply restrict the search using PATH:[URL to document library]. The Query Builder UI helps you do this by providing the ‘Restrict by app’ area:

[Image: CSWPrestricttositeorlibrary]
N.B. one potential gotcha here is that you may need ‘http’ in the path if your sites are browsed over HTTPS but crawled on HTTP (as in my case).
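So a single-library rollup might use a query along these lines – the URL is hypothetical, and note the http:// scheme per the gotcha above, even though users may browse the site over HTTPS:

    path:"http://intranet/sites/projects/Project Documents"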
If you do want to filter by site/list/library, consider of course that the good ol’ Content Query web part will work just fine here, *and* you’ll get instant changes as content is changed. What you won’t have is the Content Search web part’s ability to automatically use tokens in the query (e.g. the value of the current navigation category, a value from the current user’s profile, etc.).

Summary

The Content Search web part is a great tool in the SharePoint consultant’s box of tricks. Configuration may prove quite simple for some scenarios, but there is also a huge amount of flexibility, and a certain degree of complexity comes with that. Many advanced scenarios which make use of SP2013 search capabilities (such as Result Sources, Query Rules, promoted results and so on) will be possible – knowing the details will help you identify whether the CSWP can be the answer to a particular problem or not.
In this post, we looked simply at “displaying the right items” i.e. the search query aspect. In other posts, I’ll talk about: