Thursday, December 8, 2011

Enable & Deploy SharePoint Sandbox solution in farm running on Windows 7

Curious about SharePoint 2010 sandbox solutions, I started an empty SharePoint project and chose to deploy it as a sandboxed solution at the prompt. I created two simple web parts (not visual web parts), added a feature event receiver to create a list at the root web, then activated the feature. The solution built successfully, but deployment failed with an error - "Error occurred in deployment step 'activate features': the sandboxed code execution request was refused because the sandboxed code host service was still initializing".

Back to MSDN, where I found a useful link to help me verify whether my farm was ready to run sandboxed solutions.

To summarize

In your Central Administration,
  1.    go to System Settings -> Manage Services On Server, and make sure "Microsoft SharePoint Foundation Sandboxed Code Service" shows as Started. If not, start it.
  2.    go to Application Management -> Site Collections -> Configure Quotas and Locks, and scroll down to the bottom to check that your sandboxed solution resource quota has a value in the field "Limit maximum usage per day to". The default is 300.
 If the daily quota is exceeded, the sandboxed solutions in that site collection will stop executing.
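The same service check can also be scripted. A minimal sketch using the SharePoint 2010 Management Shell (run as a farm administrator; the filter string is an assumption based on the service's display name):

```powershell
# Find the Sandboxed Code Service instance on the local server
$svc = Get-SPServiceInstance -Server $env:COMPUTERNAME |
    Where-Object { $_.TypeName -like "*Sandboxed Code*" }

# Start it if it is not already online
if ($svc.Status -ne "Online") {
    Start-SPServiceInstance -Identity $svc.Id
}
```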

After the above steps, deployment went through fine.

Below are the things I learned about the sandbox that are worth noting down.
  • It is only a good fit if the solution needs to run in one site collection. You cannot run it across multiple site collections.
  • Sandboxed code runs in its own processes - SPUCHostService.exe and SPUCWorkerProcess.exe - instead of w3wp.exe.
  • Artifacts that must be deployed to the SharePoint root (the 14 hive) or the GAC usually do not work in a sandboxed solution. However, I learned lately that the Visual Web Part (which normally deploys its ascx file to the file system) can now be used in a sandboxed solution with the Visual Studio SharePoint Power Tools. Application pages (aspx) still cannot be used in a sandboxed solution.

I also read about using a full-trust proxy to let a sandboxed solution call a trusted assembly outside the sandbox. Sounds like a work-around. Overall, I'm not super impressed by this sandbox feature in SharePoint.

Wednesday, December 7, 2011

SharePoint 2010 Search 101

SharePoint 2010 has three different types of search: SharePoint Foundation search, SharePoint Server search, and SharePoint FAST search. The FAST search requires an additional license = $$$.

Not sure yet how FAST search works. SP Foundation and SP Server search each have their own web app pool. By default, the app pools are named with GUIDs (yikes). Search also has its own backend databases: one for crawling, one for search administration, and one for the property store.

Here is a link to get search-related PowerShell cmdlets.

Here is a Microsoft link showing how to remove a URL from search, if needed. This can be very useful when you want to hide sensitive information from search results.

After I installed SharePoint 2010 and started all the services, I got errors in my event log telling me that "The mount operation for the gatherer application has failed because the schema version of the search gatherer database is less than the minimum backwards compatibility schema version supported for this gatherer application." After some research online, I found out that I needed to upgrade to SharePoint SP1. Here is a useful Microsoft link with detailed steps for the upgrade. To find out whether your environment requires an upgrade, run this PowerShell command - "(get-spserver $env:computername).NeedsUpgrade". If it returns TRUE, it's time to upgrade. After installing SP1, you need to run "psconfig -cmd upgrade -inplace b2b -wait".
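Collected for copy / paste, the check and the post-SP1 upgrade steps described above (run the first line in the SharePoint 2010 Management Shell):

```powershell
# TRUE means the local server needs the upgrade
(Get-SPServer $env:COMPUTERNAME).NeedsUpgrade

# After installing SP1, finish the upgrade from a command prompt:
# psconfig -cmd upgrade -inplace b2b -wait
```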

All of the above just scratches the surface of SharePoint search. It is a BIG topic on its own. I recall the days when I worked as a server engineer installing TREX to crawl and index SAP documents. It was not an easy task. I'm new to SharePoint 2010 search, so I guess the fun / pain has just started...

Sunday, October 23, 2011

Lessons learned to install SharePoint 2010 on Windows 7 Home Premium

The whole motivation for installing SP 2010 server on my home laptop was to learn this product and experiment. It turned out to be a bumpy ride, and worth noting down the steps. Besides, I am a strong believer in knowledge sharing. :)

First, there is a very good online reference to help you start the fun - "Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008" (http://msdn.microsoft.com/en-us/library/ee554869.aspx). This document doesn't mention which specific edition of Windows 7 x64 is required as the operating system. I assumed that Windows 7 Home Premium x64 would do, and in the end, it worked. Also, please take every prerequisite step listed in this reference seriously.

A couple of things deviated from this reference in my experience.
1) It mentions that "Chart Controls" and "SQL Server Analysis Services - ADOMD.net" are not required if you are going to install SharePoint Foundation 2010. This didn't work for me. I downloaded SharePoint Foundation 2010 and skipped these two, and I still got the error - "Set up is unable to proceed due the following error(s): This product requires Windows Server 2008 SP2 and above…". After I installed these two prerequisites and ran setup.exe, instead of SharePointFoundation.exe, I finally saw the lovely installation page.

2) Another thing worth mentioning - IIS 7 on Windows 7 Home Premium does not offer Windows Authentication or Digest Authentication for you to select, as shown in this online reference. I thought this could be a roadblock, but it turned out to be fine (so far). :)

Please also be aware that the Enterprise edition, compared to the Standard one, lets you add Excel Services, Access Services, Visio Services, etc. You may want to compare the two editions to understand which one fits your needs better - http://sharepoint.microsoft.com/en-us/buy/pages/editions-comparison.aspx

On the first installation page, I chose to install "Standalone", instead of "Server Farm". Then, towards the end of the installation, I got a lovely error, found by digging into the "SharePoint Server Setup.log" file in your appdata's temp folder - "…dbwrap.exe' failed with error code: -2068643839. Type: 8::CommandFailed.". I researched the internet, blogs and forums, and found this reference - http://sharepointjungle.com/2010/03/06/install-sharepoint-2010-on-windows-7/. I didn't find the same registry entry mentioned in the article, but it gave me a hint that the error could be around the SQL Server connections. What I did was enable the SQL Server Browser service. Then I re-ran the SharePoint installation in repair mode. It no longer choked, ran all the way to the end, and finally gave me a success message. YEAH!

However, it was not working yet. I tried to open the SharePoint Central Administration page and got a blank page. I went back to Google. It turned out that I needed to turn off anonymous authentication on the site and turn on Basic authentication instead. After this, I finally saw the SharePoint Central Administration page with all the nice icons / pictures. It was Saturday, 11:30 PM. My husband said that I "looked like I lost the entire day." Being a geek's husband, he is used to me pulling my hair out all day long staring at my computer. Lol.

Within a few months, I was able to build SharePoint sites using PowerShell. It is much easier to have a reusable script at hand than to repeat the lengthy process described above. :o)

Friday, September 9, 2011

Dependency Injection (DI) – How and Why

There are two ways to implement dependency injection (DI) - through the class's constructor or through a property setter. In the constructor or setter, instead of accepting a concrete class type as the parameter, accept an interface. This way, you can later pass in different class objects that implement the same interface. Decoupling the constructor / setter from a concrete type provides the needed flexibility.

With this design, your classes are more unit-testable. You can create a mock class that implements the same interface as the real one, and pass the mock object to the constructor or setter. In the mock class, instead of connecting to a physical database server, you can set sample values as you wish. However, this does not eliminate the need for integration testing against the real database later.

Overall, using DI makes your classes depend less and less on concrete types, and thus makes them much easier to extend and maintain.
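A minimal constructor-injection sketch (the names IDataService, SqlDataService, MockDataService and ReportBuilder are made up for illustration):

```csharp
public interface IDataService
{
    string GetCustomerName(int id);
}

// Real implementation would talk to the database
public class SqlDataService : IDataService
{
    public string GetCustomerName(int id)
    {
        // ... query the database here ...
        return "from-db";
    }
}

// Mock used in unit tests - no database needed
public class MockDataService : IDataService
{
    public string GetCustomerName(int id) { return "Test Customer"; }
}

public class ReportBuilder
{
    private readonly IDataService _dataService;

    // Constructor injection: depend on the interface, not a concrete type
    public ReportBuilder(IDataService dataService)
    {
        _dataService = dataService;
    }

    public string BuildHeader(int customerId)
    {
        return "Report for " + _dataService.GetCustomerName(customerId);
    }
}

// In production:  new ReportBuilder(new SqlDataService());
// In a unit test: new ReportBuilder(new MockDataService());
```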

understanding NULL Pattern

It is easy to miss a NULL check in our code, especially when the code is not fully tested. Wouldn't it be nice to call an object's property or method without first checking whether the object is NULL, and without throwing a NullReferenceException? The answer is to implement the Null Object pattern.

How?

Create an interface for your real and null-object classes. In the real class, do the real implementation. In the null-object class, put in the same signatures as the real one, but with no code, or return some default values. Callers interact with the interface only. If the object is the null object, calling its methods will just do nothing or return default values. As a result, your code has far fewer IF tests for NULL references, and looks more elegant and compact.

However, implementing the Null Object pattern does create more work initially.
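A quick sketch of the pattern (ILogger, FileLogger and OrderProcessor are hypothetical names invented for this example):

```csharp
public interface ILogger
{
    void Log(string message);
}

public class FileLogger : ILogger
{
    public void Log(string message)
    {
        // Real implementation: write to a file
        System.IO.File.AppendAllText("app.log", message + "\n");
    }
}

// The null object: same signature, intentionally does nothing
public class NullLogger : ILogger
{
    public void Log(string message) { /* no-op */ }
}

public class OrderProcessor
{
    private readonly ILogger _logger;

    public OrderProcessor(ILogger logger)
    {
        // Substitute the null object instead of storing a null reference
        _logger = logger ?? new NullLogger();
    }

    public void Process()
    {
        // No "if (_logger != null)" check needed anywhere
        _logger.Log("Order processed.");
    }
}
```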

Here is the useful reference where I learned all about this - http://sourcemaking.com/design_patterns/null_object

Friday, August 5, 2011

understanding AJAX timeout settings for .Net applications

Why bother? If you use the default AJAX timeout setting, which is 110 seconds (or 90 seconds for .Net Framework 1.0/1.1), users may get timeout errors, especially when they use the web application over a slow network connection.

To set the AJAX timeout, you need to check three things in your code

1) add the below code to the web.config file - for example, to set it to 10 minutes (600 seconds).

- <httpRuntime executionTimeout="600" maxRequestLength="40960"/>

2) the executionTimeout setting above only applies if you set the "debug" attribute of the <compilation> tag to "false" in the web.config. Here is the reference

- http://msdn.microsoft.com/en-us/library/e1f13641.aspx

3) you also need to set AsyncPostBackTimeout on the aspx page (or the site.master page) to match the setting in the web.config file.

- <asp:ScriptManager ID="ScriptManager1" runat="server" AsyncPostBackTimeout="600">
...
</asp:ScriptManager>
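Putting the web.config side of this together (steps 1 and 2), the fragment might look roughly like this (other attributes omitted):

```xml
<configuration>
  <system.web>
    <!-- executionTimeout is only honored when debug="false" -->
    <compilation debug="false" />
    <httpRuntime executionTimeout="600" maxRequestLength="40960" />
  </system.web>
</configuration>
```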

This is the kind of hidden bug that shows up when your developers' or QC's network is faster than your end users'. :)

Of course, you also need to watch out for the timeout settings of your website and web services, if any.

Thursday, June 30, 2011

SSRS, ReportViewer, RDLC, and using object(s) as data source(s)

Lately, I worked on a project to create some reports. I chose SSRS RDLC (local processing reports) and decided to explore using objects as data sources. I am very pleased with what RDLC has to offer. It is super easy to bind objects (dlls) to reports. With RDLC, you no longer need to publish reports separately to a report server. Instead, reports become part of the application deployment. This certainly cuts down the deployment cycle time. The ReportViewer control and the ReportDataSource and ReportParameter classes are the main tools to interact with. A single RDLC can work with multiple data sources and parameters. Parameters can be any data type. A string array is used to pass multiple selections to a parameter. When passing a datetime as a parameter, you don't need to do a string conversion, as long as the string is well-formatted. It is trivial, but it makes a developer's life easier.

There are many benefits to using objects as a report's data source. You can re-use your existing data service classes, intended for the presentation layer, to retrieve the same data. You can use nested objects for your report and sub-reports. LINQ becomes available to let you further filter or join data. The amount of code in SQL stored procedures can be significantly reduced. Code is more unit-testable. However, it took me a while to figure out how to update the data sources after making changes to my objects. You need to re-compile the dll, and then go inside the RDLC to refresh each data source. The top-level refresh button in the Report Data view panel did not work for me.
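In code, wiring an object collection to an RDLC looks roughly like this (a sketch for the WebForms ReportViewer; Order, OrderService, the report path, and the dataset / parameter names are all placeholders I made up):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Reporting.WebForms;

public partial class OrderReportPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Re-use an existing data service that returns plain objects
        List<Order> orders = new OrderService().GetOrders();

        ReportViewer1.ProcessingMode = ProcessingMode.Local;
        ReportViewer1.LocalReport.ReportPath = "Reports/OrderSummary.rdlc";

        // The name must match the dataset name defined inside the RDLC
        ReportViewer1.LocalReport.DataSources.Clear();
        ReportViewer1.LocalReport.DataSources.Add(
            new ReportDataSource("OrderDataSet", orders));

        // Parameters: plain strings, well-formatted date strings,
        // or string arrays for multi-select
        ReportViewer1.LocalReport.SetParameters(new ReportParameter[] {
            new ReportParameter("StartDate", "2011-06-01"),
            new ReportParameter("Regions", new string[] { "East", "West" })
        });

        ReportViewer1.LocalReport.Refresh();
    }
}
```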

SSRS is my favorite reporting tool compared to others. I have years of experience with other reporting tools, like Crystal, SAP BEx, and Visual Composer. The built-in functions in SSRS are extensive and easy to use. I could calculate anything any way I wanted and where I wanted. There are always tricks you can find to get the job done nicely, ranging from formatting and sorting to aggregations at the group or detail level. These reports get the same built-in UI that comes with the ReportViewer, allowing data export in many formats - PDF, Excel, Word...

Overall, this alternative way of delivering canned reports makes the development and deployment life-cycle relatively simpler compared to traditional server-based SSRS reports.


Wednesday, March 2, 2011

my MVC 3 journey

Went to Seattle last week to attend a very cool workshop. I was exposed to many new terms, technologies and tools used in a demo project. Here is a summary of some of them:

1) Razor (cshtml) - Wondering why we no longer saw any aspx files in the project, but only cshtml files? Well, welcome to the new Razor world. I did more reading about Razor afterwards. It looks very neat and compact. Here is one of the best blogs about Razor for beginners - Introducing "Razor" - a new view engine for ASP.NET, from Scott Gu.

2) EF Code First - the coolest thing about this is that you don't need to create the database or tables beforehand. When you run your application the first time, if it cannot find the database and tables, it creates them based on your class model definitions. Like magic! Of course, you need to wire up the db connection in the web.config file. Furthermore, if you make changes to any classes (add / remove properties), you can configure the application to drop and recreate the database in the Application_Start() event using "DbDatabase.SetInitializer". For sure, this only makes sense in a testing / development environment, where losing data is not an issue. One more tip: you can override default data types using System.Data.Entity.ModelConfiguration and the OnModelCreating() method.
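Roughly what the Code First setup looks like (Post, BlogContext and the initializer choice are made up for illustration; note the CTP the workshop used called it DbDatabase.SetInitializer, which the released EF 4.1 renamed to Database.SetInitializer):

```csharp
using System.Data.Entity;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    // EF creates the database and a Posts table on first use,
    // based on this class model
    public DbSet<Post> Posts { get; set; }
}

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Development only - drops and recreates the database (data is lost!)
        // whenever the class model changes
        Database.SetInitializer(
            new DropCreateDatabaseIfModelChanges<BlogContext>());
    }
}
```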

3) Validation - The validations in MVC 3 look much simpler and need less code compared to what we are using right now. Microsoft takes care of both the client-side and server-side (in case JavaScript is disabled in the user's browser) validations for you. All you need to do is add decorations above your class properties like this - [Required(ErrorMessage = "This field XXXXX is required")]. Available built-in validations include "Required", "StringLength", "Range"... Cool!
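A small model sketch with the built-in attributes mentioned above (Customer and the messages are placeholders):

```csharp
using System.ComponentModel.DataAnnotations;

public class Customer
{
    // Server-side AND client-side validation both come from these attributes
    [Required(ErrorMessage = "This field Name is required")]
    [StringLength(50, ErrorMessage = "Name cannot exceed 50 characters")]
    public string Name { get; set; }

    [Range(18, 120, ErrorMessage = "Age must be between 18 and 120")]
    public int Age { get; set; }
}
```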

4) QUnit - how do you unit test when all the logic moves into JavaScript? QUnit and Jasmine were used in the demo. Having tools like these is an absolute step up for the JavaScript world.

Finally, here is the link covering all the related technologies included in this project - Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix - again from Scott Gu.

This was one of the best workshops. Feeling smarter now. ;)

Saturday, January 15, 2011

a good blog about generics and constraining them with "where"


James M. Hare's blog about constraining generics with "where" clause


The blog is thorough. Saving it for future reference.

To summarize the key syntax:

- constrain to an interface or a base class

public class MyClass<T> where T : IMyInterface<T>

- constrain to a reference type

where T : class

- constrain to a value type

where T : struct

- constrain to a required (parameterless) constructor

where T : class, new()

"Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support." In other words, "if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders."
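Two small sketches of what the constraints buy you (Repository and GenericHelpers are names I made up):

```csharp
using System;

// class + new() lets the repository instantiate T itself
public class Repository<T> where T : class, new()
{
    public T CreateDefault()
    {
        return new T(); // legal only because of the new() constraint
    }
}

public static class GenericHelpers
{
    // The IComparable<T> constraint makes CompareTo available on T
    public static T Max<T>(T a, T b) where T : IComparable<T>
    {
        return a.CompareTo(b) >= 0 ? a : b;
    }
}

// GenericHelpers.Max(3, 7) returns 7
// GenericHelpers.Max("apple", "pear") returns "pear"
```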

This empowers you to write less code across types. Less is more.

Saturday, January 8, 2011

.Net 4.0 Framework and Lazy<T> - when lazier gets better

Found a couple of good blogs about Lazy<T> in the new .Net 4.0 framework.

http://sankarsan.wordpress.com/2009/10/04/laziness-in-c-4-0-lazyt/

http://weblogs.asp.net/gunnarpeipman/archive/2009/05/19/net-framework-4-0-using-system-lazy-lt-t-gt.aspx

The key points are:

- When you create a new object, you no longer write "MyClass obj = new MyClass();". Instead, you write "Lazy<MyClass> obj = new Lazy<MyClass>();".

- For classes with no parameterless constructor, you need to supply a factory delegate:
new Lazy<Pet>(delegate() { return new Pet("DuDu", "Siberian Husky"); });

- The Lazy<T> will NOT instantiate the instance until its value is accessed.

- Note: the blogs above describe the beta API. In the shipped .Net 4.0, Lazy<T> IS thread safe by default (LazyThreadSafetyMode.ExecutionAndPublication), and the beta's LazyExecutionMode enum became LazyThreadSafetyMode. You can control the mode through these constructors:

    1) public Lazy(LazyThreadSafetyMode mode)
    2) public Lazy(Func<T> valueFactory, LazyThreadSafetyMode mode)
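A minimal, self-contained sketch of Lazy<T> as shipped in .Net 4.0 (Pet is a made-up class; note the shipped enum is LazyThreadSafetyMode):

```csharp
using System;
using System.Threading;

public class Pet
{
    public Pet(string name, string breed) { /* expensive setup here */ }
}

class Program
{
    static void Main()
    {
        // Factory delegate for a class with no parameterless constructor,
        // with fully thread-safe initialization
        var lazyPet = new Lazy<Pet>(
            () => new Pet("DuDu", "Siberian Husky"),
            LazyThreadSafetyMode.ExecutionAndPublication);

        Console.WriteLine(lazyPet.IsValueCreated); // False - nothing built yet
        Pet p = lazyPet.Value;                     // constructor runs here
        Console.WriteLine(lazyPet.IsValueCreated); // True
    }
}
```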

What is the big gain here? Well, you no longer need to write your own code to implement laziness, as you did in .Net 3.5 or earlier. Instead, the constructor implements the lazy loading for you.

Isn't it cool to be lazier and still get the same thing done?