Referencing CLR Projects in Visual Studio 2012


October 9, 2013 | Posted in Visual Studio 2012

When migrating your database projects from Visual Studio 2010 to Visual Studio 2012, you may run into a problem: if a database project references a C# class library (your CLR project), you’ll find the reference no longer resolves. Why?

Simply right-click the CLR reference in your database project, go to Properties, and set “Model Aware” to True; your issue will be resolved.


Bing Geocode Service Returns The remote server returned an unexpected response: (417) Expectation Failed.


October 4, 2013 | Posted in .Net

I’d been using the geocode SOAP service for a while when all of a sudden I started to run into the following error:

The remote server returned an unexpected response: (417) Expectation Failed.

To get around the issue, I added a simple setting before making the request to the service (see the Expect100Continue line below):

private void GeocodeAddress(string address)
{
    string key = Properties.Settings.Default.BingMapsKey;
    GeocodeRequest geocodeRequest = new GeocodeRequest();

    // Set the credentials using a valid Bing Maps key
    geocodeRequest.Credentials = new GeocodeService.Credentials();
    geocodeRequest.Credentials.ApplicationId = key;

    // Set the full address query
    geocodeRequest.Query = address;

    // Set the options to only return high confidence results
    ConfidenceFilter[] filters = new ConfidenceFilter[1];
    filters[0] = new ConfidenceFilter();
    filters[0].MinimumConfidence = GeocodeService.Confidence.High;

    // Add the filters to the options
    GeocodeOptions geocodeOptions = new GeocodeOptions();
    geocodeOptions.Filters = filters;
    geocodeRequest.Options = geocodeOptions;

    // Switch off the 100-continue expectation
    System.Net.ServicePointManager.Expect100Continue = false;

    // Make the geocode request
    GeocodeServiceClient geocodeService = new GeocodeServiceClient();
    GeocodeResponse geocodeResponse = geocodeService.Geocode(geocodeRequest);

    if (geocodeResponse.Results.Length > 0)
    {
        this._addressLatitude = Convert.ToString(geocodeResponse.Results[0].Locations[0].Latitude);
        this._addressLongitude = Convert.ToString(geocodeResponse.Results[0].Locations[0].Longitude);
    }
}
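If you prefer not to set this in code, the same switch can be applied through configuration instead; a sketch of the app.config equivalent using the standard system.net settings:

```xml
<configuration>
  <system.net>
    <settings>
      <!-- disables the Expect: 100-continue header for outgoing HTTP requests -->
      <servicePointManager expect100Continue="false" />
    </settings>
  </system.net>
</configuration>
```

Note that either form affects all outgoing HTTP requests in the process, not just the geocode call.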


Scale out SQL Server by using Reliable Messaging


October 3, 2013 | Posted in SQL Server

How do large-scale sites and applications remain SQL-based? is a recent article from Michael Rys (Blog|Twitter) that highlights the relational, SQL Server based web-scale deployment at MySpace. I have talked before about how MySpace uses Service Broker as a reliable messaging backbone to power the communications between more than 1,000 databases, allowing them to scale out and partition the user information into individual shards. Here are some more details about this architecture:

This new article uses the MySpace deployment as a case study to counterbalance the claim that large web-scale deployments require NoSQL storage because relational databases cannot scale. BTW, I know the SQL vs. NoSQL discussion is more subtle than that, but I won’t go into details here. I think a good read on that topic is NoSQL vs. RDBMS: Let the flames begin!.

Why is reliable messaging a key tenet of implementing a data-partitioned scale-out application? Consider a typical example of a modern web-scale application: users connect, get authenticated, and view their own profile, but they are also interested in the status updates or wall messages from their network of friends or followers. It is easy to see how one partitions the user profile data, but how do you partition the user ‘walls’? User A is in a partition hosted on node 1, while User B is in a partition hosted by node 2. When User A updates his status, how does this show up on User B’s wall?

One option is to have the application update the wall of User B when User A changes his status. But this prevents scalability, because writes now have to occur on many nodes and the application has to orchestrate all of these writes. Think of Lady Gaga updating her status: the application has to update the wall of every follower.

Another option is to have the application read the status from User A when displaying the wall of User B. This doesn’t scale either, because reads now occur on many nodes. All those Lady Gaga followers refreshing their wall pages have to read from the one node hosting her status, and that node is soon overwhelmed.

A solution is to replicate the write of User A’s status onto User B’s wall. The application only updates User A’s status, and the infrastructure propagates this update to User B’s wall.

But traditional replication was historically designed for replicating entire data sets of fixed schema over static topologies, and it falls short in web-scale deployments of hundreds or thousands of nodes:

  • it depends too tightly on physical location and cannot adapt to rapid changes of topology (nodes being added and removed)
  • it is based on schema-defined filtering, which is difficult to map to complex application-specific data routing conditions (this update goes to node 2 because User B follows User A, but that update goes to node 3 because User D follows User C)
  • it is very sensitive to schema changes, making application upgrade rollouts a big challenge

Messaging is designed with application-to-application communication in mind and has different semantics that are friendlier to large scale-out deployments:

  • Logical routing to isolate application from topology changes
  • Protocol versioning information allows side-by-side deployments, making application upgrade rollouts possible
  • Data schema can change more freely as peers are shielded from changes by keeping the communication protocol unchanged

With messaging in place, the application updates the status of User A and then drops a message into a local outbound queue to notify all friends of A. The reliable messaging infrastructure dispatches this message to all interested nodes, and processing of this message results in an update of User B’s wall. MySpace uses Service Broker as the reliable messaging infrastructure, and they make up for the lack of publish/subscribe by adding a router-dispatcher unit: all messages are sent to this dispatcher, which in turn figures out how many subscribers have to be notified and sends individual messages to them. See the slides linked at the beginning of my post for more details. As a side note, I see that StackExchange is also embarking on the route of creating a message bus for their pub/sub infrastructure, but they use Redis as a message store; see async Redis await BookSleeve.
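The fan-out-on-write pattern described above can be sketched in a few lines. This is a toy, in-memory Python illustration (all names are hypothetical; the queues stand in for Service Broker queues) of one outbound message being expanded by a router-dispatcher into one wall update per follower:

```python
from collections import defaultdict, deque

# In-memory stand-ins for the real infrastructure (toy example).
outbound = deque()                       # writer node's local outbound queue
walls = defaultdict(list)                # per-user wall storage
followers = {"A": ["B", "C"]}            # subscription data held by the dispatcher

def update_status(user, status):
    """Write locally, then drop ONE message into the local outbound queue."""
    outbound.append({"user": user, "status": status})

def dispatch():
    """The router-dispatcher expands each message into one per subscriber."""
    while outbound:
        msg = outbound.popleft()
        for follower in followers.get(msg["user"], []):
            # In the MySpace design this would be a Service Broker SEND
            # routed to the node hosting the follower's partition.
            walls[follower].append((msg["user"], msg["status"]))

update_status("A", "hello world")
dispatch()
print(walls["B"])   # [('A', 'hello world')]
```

The point of the pattern is that the writer does constant work per status update; only the dispatcher knows the subscription list.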

Robert Scoble had a post, MySpace’s death spiral: insiders say it’s due to bets on Los Angeles and Microsoft, in which he claims that MySpace insiders blame their loss to Facebook on, amongst other things, the complexity of this infrastructure:

Workers inside MySpace tell me that this infrastructure, which they say has “hundreds of hacks to make it scale that no one wants to touch”, is hamstringing their ability to really compete.

I do not know if the sources quoted by the article are right or wrong, and I was never personally involved with the MySpace deployment, but since I was so closely involved in building SQL Service Broker, and since SSB is one of the critical components used by the MySpace infrastructure, I am understandably interested in this discussion. Service Broker has a notoriously steep learning curve, there are no tools to administer, monitor, and troubleshoot Service Broker deployments, and there is absolutely no support in the client programming stack (the managed .NET Framework). This is why all the Service Broker deployments I know of are large enterprise shops, staffed with architects who understand the unique advantages this technology brings and who are willing to venture into uncharted waters despite the prospect of being unable to hire development and ops talent with Service Broker expertise. But the feedback I hear from these deployments is almost always positive: once deployed, Service Broker just works. Someone told me that the Service Broker solution they had was the only piece of technology that “did not break in the last 3 years”, and that covers an upgrade from SQL Server 2005 to SQL Server 2008. See Is Service Broker in SQL Server 2008 compatible with the one in SQL Server 2005? to see how Service Broker helps address infrastructure upgrade problems. Personally, I am not surprised that Service Broker turns out to be a cornerstone of SQL Server’s response to the NoSQL challenge, but this virtually unheard-of technology (89 questions tagged service-broker on StackOverflow at the time of writing) will be quite a surprise answer for many.


Working Out Which Service Broker Queues Are Bound To Your Services


October 3, 2013 | Posted in SQL Server

select    [service].name as service_name,
          [queue].name as queue_name
from      sys.services [service]
join      sys.service_queue_usages [usage]
on        [service].service_id = [usage].service_id
join      sys.service_queues [queue]
on        [queue].object_id = [usage].service_queue_id
where     [queue].is_ms_shipped = 0


Printing Blocks Of SQL


October 3, 2013 | Posted in SQL Server

-- @sql contains the sql to be printed in segments
DECLARE @CurrentEnd BIGINT; /* track the length of the next substring */
DECLARE @offset TINYINT;    /* tracks the amount of offset needed */

SET @sql = REPLACE(REPLACE(@sql, CHAR(13) + CHAR(10), CHAR(10)), CHAR(13), CHAR(10))

WHILE LEN(@sql) > 1
BEGIN
    IF CHARINDEX(CHAR(10), @sql) BETWEEN 1 AND 4000
    BEGIN
        SET @CurrentEnd = CHARINDEX(CHAR(10), @sql) - 1
        SET @offset = 2
    END
    ELSE
    BEGIN
        SET @CurrentEnd = 4000
        SET @offset = 1
    END
    PRINT SUBSTRING(@sql, 1, @CurrentEnd)
    SET @sql = SUBSTRING(@sql, @CurrentEnd + @offset, LEN(@sql))
END /* end while loop */


Debugging Remote SQL Server Service Broker Configuration


October 3, 2013 | Posted in SQL Server

Run command prompt with the following:

cd C:\Program Files\Microsoft SQL Server\100\Tools\Binn

The syntax to then check two services on different servers is:

ssbdiagnose CONFIGURATION FROM SERVICE /test/initiator
-S InitiatorComputer -U InitiatorLogin -P !wEx23Dvb
-d InitiatorDatabase TO SERVICE /test/target -S TargetComputer
-U TargetLogin -P ER!49jiy -d TargetDatabase ON CONTRACT TestContract


Stopping And Starting SQL Server Agent Using Batch Files


October 3, 2013 | Posted in SQL Server


The first batch file stops the agent (the service name shown is for a default instance):

@echo off

net stop SQLSERVERAGENT

echo Completed at %date%_%time%
echo Holding result for 24 hours.
ping -n 86400 127.0.0.1 > nul

The second batch file starts it again:

@echo off

net start SQLSERVERAGENT

echo Completed at %date%_%time%
echo Holding result for 24 hours.
ping -n 86400 127.0.0.1 > nul

Run both files as administrator.


Disaster Recovery: What to do when the SA account password is lost in SQL Server 2005


October 3, 2013 | Posted in SQL Server

You may have faced the issue of losing the SQL Server SA password. Perhaps you followed the security best practice of removing BUILTIN\Administrators from the sysadmin server role, and no one you can find is in the sysadmin role. At this point you may think that your only options are to reinstall SQL Server and attach the databases, or to directly access the master database files, which may potentially damage the data.

SQL Server 2005 provides a better disaster recovery option for this scenario, one that is non-intrusive for the master database and that will help you preserve any objects and data stored in it (such as logins, certificates, the Service Master Key, etc.) intact. Members of the Windows Administrators group have access to SQL Server when SQL Server is started in single-user mode, also known as “maintenance mode”.

By requiring single-user mode, SQL Server 2005 prevents a Windows Administrator from abusing this privilege to act on behalf of the sysadmin without being noticed, while still allowing Windows Administrator accounts to perform certain maintenance tasks, such as installing patches.

In order to start SQL Server in single-user mode, you can add the parameter “-m” at the command line. You can also use the SQL Server Configuration Manager tool, which provides proper controls for the file access and other privileges. To use the Configuration Manager tool to recover your system, use the following steps:

1. Open the Configuration Manager tool from the “SQL Server 2005| Configuration” menu

2. Stop the SQL Server Instance you need to recover

3. Navigate to the “Advanced” tab, and in the Properties text box add “;-m” to the end of the list in the “Startup parameters” option

4. Click the “OK” button and restart the SQL Server Instance

NOTE: make sure there is no space between “;” and “-m”; the registry parameter parser is sensitive to such typos. You should see an entry in the SQL Server ERRORLOG file that says “SQL Server started in single-user mode.” After the SQL Server Instance starts in single-user mode, the Windows Administrator account is able to connect to SQL Server using the sqlcmd utility with Windows authentication. You can use Transact-SQL commands such as “sp_addsrvrolemember” to add an existing login (or a newly created one) to the sysadmin server role.

Run the following from command prompt:

sqlcmd -E -S LOCALHOST\MSSQLSERVER (hit enter)

1> EXEC sp_addsrvrolemember 'DOMAIN\YourAccount', 'sysadmin'; (hit enter)

2> GO (hit enter)

Once the sysadmin access has been recovered, remove the “;-m” from the startup parameters using the Configuration Manager and restart the SQL Server Instance.


Scott Hanselman’s 2011 Ultimate Developer and Power Users Tool List for Windows


October 3, 2013 | Posted in Windows

Probably the most comprehensive list of useful utilities out there!

Thanks Scott.


SQL Server Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.


October 3, 2013 | Posted in SQL Server

This is a classic issue when using SQL Server: two processes try to lock the same resources at the same time. If you are handling this with some sort of retry mechanism, as I am, try the following to slow SQL Server down before attempting the command again.

/* delay retry for a random number of seconds >= 4 and <= 9 */
declare @Random int, @Upper int, @Lower int, @time varchar(50);

set     @Lower = 4
set     @Upper = 9
select  @Random = ROUND(((@Upper - @Lower) * RAND() + @Lower), 0)

set @time = '00:00:0' + cast(@Random as varchar(50))

/* wait before retrying the command */
waitfor delay @time
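On the application side, the retry loop itself can be sketched like this (a hypothetical Python helper; DeadlockError stands in for catching SQL Server error 1205, and the injectable sleep mirrors the random T-SQL delay):

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for SQL Server error 1205 (chosen as deadlock victim)."""

def run_with_retry(operation, attempts=3, low=4, high=9, sleep=time.sleep):
    """Rerun a deadlock-prone operation, pausing a random low..high
    seconds between attempts, and give up after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except DeadlockError:
            if attempt == attempts:
                raise                          # out of attempts: surface the error
            sleep(random.randint(low, high))   # random back-off before retrying
```

A real caller would translate the driver's deadlock error into DeadlockError before it reaches this helper.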



SET NOCOUNT ON Improves SQL Server Stored Procedure Performance


October 3, 2013 | Posted in SQL Server

One of the biggest things that DBAs try to do on a daily basis is ensure that their database systems run as fast as possible. As more and more users access the databases and the databases continue to grow, performance slowdowns are almost inevitable. Because of this, DBAs and developers should do everything they can to keep performance in mind early in the database lifecycle. This is not always easy to do, because of the unknowns and the changes that occur over time, but there are some simple things that can be done, and we will touch upon one of these in this tip.

Sometimes even the simplest things can make a difference. One of these simple items that should be part of every stored procedure is SET NOCOUNT ON. This one line of code, put at the top of a stored procedure, turns off the messages that SQL Server sends back to the client after each T-SQL statement is executed. This applies to all SELECT, INSERT, UPDATE, and DELETE statements. Having this information is handy when you run a T-SQL statement in a query window, but when stored procedures are run there is no need for this information to be passed back to the client.

Removing this extra overhead from the network can greatly improve overall performance for your database and application.

If you still need the number of rows affected by the T-SQL statement that is executing, you can still use @@ROWCOUNT. Even with SET NOCOUNT ON in place, this function still works and can be used in your stored procedures to identify how many rows were affected by the statement.

Microsoft recognized the issue this creates and changed the stored procedure templates from SQL Server 2000 to SQL Server 2005.

Here is the old template style available in SQL Server 2000 without the SET NOCOUNT ON.

-- =============================================
-- Create procedure basic template
-- =============================================
-- creating the stored procedure
IF EXISTS (SELECT name
           FROM sysobjects
           WHERE name = N'<procedure_name, sysname, proc_test>'
           AND type = 'P')
    DROP PROCEDURE <procedure_name, sysname, proc_test>
GO

CREATE PROCEDURE <procedure_name, sysname, proc_test>
    <@param1, sysname, @p1> <datatype_for_param1, , int> = <default_value_for_param1, , 0>,
    <@param2, sysname, @p2> <datatype_for_param2, , int> = <default_value_for_param2, , 0>
AS
    SELECT @p1, @p2
GO
-- =============================================
-- example to execute the stored procedure
-- =============================================
EXECUTE <procedure_name, sysname, proc_test> <value_for_param1, , 1>, <value_for_param2, , 2>
GO

Here is the new template style available in SQL Server 2005 with the SET NOCOUNT ON.

-- ================================================
-- Template generated from Template Explorer using:
-- Create Procedure (New Menu).SQL
--
-- Use the Specify Values for Template Parameters
-- command (Ctrl-Shift-M) to fill in the parameter
-- values below.
--
-- This block of comments will not be included in
-- the definition of the procedure.
-- ================================================
-- =============================================
-- Author:      <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
CREATE PROCEDURE <Procedure_Name, sysname, ProcedureName>
    -- Add the parameters for the stored procedure here
    <@Param1, sysname, @p1> <Datatype_For_Param1, , int> = <Default_Value_For_Param1, , 0>,
    <@Param2, sysname, @p2> <Datatype_For_Param2, , int> = <Default_Value_For_Param2, , 0>
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    SELECT <@Param1, sysname, @p1>, <@Param2, sysname, @p2>
END
GO

As you can see, even simple little things such as this can make an overall improvement for your database environment. Stay tuned for other simple tricks and techniques to improve performance.

Original source:


Creating An Entity Framework Connection String


October 3, 2013 | Posted in .Net

Not quite the same as the conventional ADO.NET connection string. Here is a quick function to return the format required by Entity Framework:

Public Shared Function EntityFrameworkConnectionString() As String
    Dim sqlBuilder As New System.Data.SqlClient.SqlConnectionStringBuilder() _
        With {.DataSource = "ServerName", _
              .InitialCatalog = "DatabaseName", _
              .IntegratedSecurity = True}

    Dim efBuilder As New System.Data.EntityClient.EntityConnectionStringBuilder() _
        With {.Metadata = "res://*", _
              .Provider = "System.Data.SqlClient", _
              .ProviderConnectionString = sqlBuilder.ConnectionString}

    Return efBuilder.ConnectionString
End Function


Creating A SQL Server Table, The Easy Way


October 3, 2013 | Posted in SQL Server

I wanted to create a quick table in my database to store some values so that I could later roll back the values in the existing table once I had finished some testing. Rather than creating a temporary table for the lifespan of my SQL connection, I created a table with this simple T-SQL statement:

select * into tblTempTable
from tblExistingTable



Debugging SQL Server Service Broker Configuration


October 3, 2013 | Posted in SQL Server

Updated: 28/06/2019 :: Consider the following URL to see if it helps in resolving messages not being received on the target due to a likely connection issue, or if you are seeing “Service Broker received an error message on this conversation. Service Broker will not transmit the message” on the initiator >>>

Original post:

Having probed SQL Server Service Broker for some time trying to diagnose a fault with its configuration, I soon found a tool that would do this for me in the Tools folder under the installation directory, called ssbdiagnose.exe. Below is an example of how to query the configuration of your Service Broker setup (it will diagnose and return any faults):

Run a command prompt (I ran this from this machine as I had SQL Server 2008 installed, which this tool is part of):

cd C:\Program Files\Microsoft SQL Server\100\Tools\Binn

C:\Program Files\Microsoft SQL Server\100\Tools\Binn>ssbdiagnose -U username -P password -S "server ip" -d "databasename" CONFIGURATION FROM SERVICE "http://fromservice" TO SERVICE "http://toservice" ON CONTRACT "http://contract"

The results may look something like the following:

Microsoft SQL Server 10.50.1600.1
Service Broker Diagnostic Utility
D  29997     Database1     Service Broker GUID is identical to that of database Database1 on server
D  29975     Database1     User dbo does not have SEND permission on service http://someservice
D  29964     Database2     The master key was not found
3 Errors, 0 Warnings

C:\Program Files\Microsoft SQL Server\100\Tools\Binn>

It’s also worth noting that you can run ssbdiagnose against a SQL Server 2005 instance.


Drop SQL CLR Assembly And Its Dependencies


October 3, 2013 | Posted in SQL Server

Having looked for something that would drop an assembly’s dependencies prior to dropping the assembly itself, I couldn’t find anything, so I put this together. Enjoy.

/* review schema information */
select  [schema].Name,
        [objects].Name,
        [objects].type_desc
from    sys.assembly_modules [modules]
join    sys.objects [objects] with (nolock)
on      [modules].object_id = [objects].object_id
join    sys.schemas [schema] with (nolock)
on      [objects].schema_id = [schema].schema_id
where   [modules].assembly_class in (N'NAME')

begin try

begin transaction;

/* drop clr dependencies */
declare @dynamicSQL nvarchar(max)

declare @schema nvarchar(200)
declare @name nvarchar(200)
declare @type nvarchar(200)

declare db_cursor cursor for
select  [schema].Name,
        [objects].Name,
        [objects].type_desc
from    sys.assembly_modules [modules]
join    sys.objects [objects] with (nolock)
on      [modules].object_id = [objects].object_id
join    sys.schemas [schema] with (nolock)
on      [objects].schema_id = [schema].schema_id
where   [modules].assembly_class in (N'NAME')

open db_cursor
fetch next from db_cursor into @schema, @name, @type

while @@FETCH_STATUS = 0
begin
    set @dynamicSQL = N'';
    set @dynamicSQL = case @type
        when N'CLR_STORED_PROCEDURE' then N'drop procedure' + N' ' + @schema + N'.' + @name + N';'
        when N'CLR_SCALAR_FUNCTION' then N'drop function' + N' ' + @schema + N'.' + @name + N';'
        when N'CLR_TABLE_VALUED_FUNCTION' then N'drop function' + N' ' + @schema + N'.' + @name + N';'
        else N'Object Not Recognised'
    end

    --print @dynamicSQL;
    exec sp_executesql @dynamicSQL;

    fetch next from db_cursor into @schema, @name, @type
end

close db_cursor
deallocate db_cursor

/* drop assembly once free of dependencies */
drop assembly [NAME]

if @@trancount > 0
commit transaction;

end try
begin catch

if @@trancount > 0
rollback transaction;
print 'ERROR: ' + error_message()

end catch


Team Foundation Service Burndown/Reports Aren’t Updating


October 3, 2013 | Posted in Team Foundation Server

If your TFS burndown chart and team reports such as Sprint Cumulative Flow aren’t updating, the likelihood is that the Team Foundation Server Task Scheduler service isn’t running on your Team Foundation Server. Under Windows services (from Administrative Tools or Computer Management), look for Visual Studio Team Foundation Service Task Scheduler and start the service if it isn’t already running.

If it doesn’t update your burndown or reports straight away, there is a way to probe its current status and to run the scheduled tasks manually by using the WarehouseController. The Windows service above is generally located under C:\Program Files\Microsoft Visual Studio 2008 Team Foundation Server\TFSServerScheduler, which is where you’ll find a file called ScheduledServices.xml. In here you will find the warehouse controller URL (run this in a web browser), which can be used to probe the web service’s state along with other such queries.


The two that you probably want to use are called:


<WarehouseStatus xmlns="">Idle</WarehouseStatus>


<boolean xmlns="">true</boolean>

The latter will run the scheduled tasks (give it a few minutes and you should see your report data and burndown charts spring to life).

For more information see


How to select a value from an XML field in SQL Server


October 3, 2013 | Posted in SQL Server

Need to select some values quickly from an XML field in SQL Server?

select Field.value('(/OuterNode//InnerNode/node())[1]', 'varchar(50)') as Value
from [schema].tblTemp with (nolock)


Be agile!


October 3, 2013 | Posted in Agile


Why can’t I add or move columns in SSMS designer view?


October 3, 2013 | Posted in SQL Server

SSMS is by default quite picky about what you can add or move without dropping and recreating a table. You can switch off this default behavior by unchecking the following option under Tools > Options > Designers > Table and Database Designers: “Prevent saving changes that require table re-creation”.


How can I put a Service Broker message back onto the queue, if my processing fails?


October 3, 2013 | Posted in SQL Server

One of the questions I had in my mind when I started working with Service Broker was about the safety of my messages. Service Broker is a reliable messaging platform; it is capable of delivering a message to the target without fail. But what if the target received the message but could not process it?

When a queue reader program reads a message from the queue with the RECEIVE command, the message is removed from the queue. If the RETENTION setting of the queue is set to ON, the message is not really removed from the queue; instead its status is updated to 0 to indicate that the message has been read and processed. (We discussed RETENTION in an earlier post.) In either case, we will not be able to receive the message a second time.

To make sure that messages are reliably processed, we need to put the RECEIVE command in a transaction. The processing of the message should happen within the transaction, and if an error occurs, the transaction should ROLLBACK. When the transaction in which the RECEIVE operation executed is rolled back, the message will not be removed from the queue (or updated to ‘read’). The next RECEIVE command will fetch the message once again. The following code sample explains this.

-- local variables
DECLARE
    @MessageBody VARBINARY(MAX),
    @MessageTypeName SYSNAME,
    @ConversationHandle UNIQUEIDENTIFIER,
    @XML XML,
    @Response XML

WHILE (1 = 1)
BEGIN
    -- start the transaction
    BEGIN TRANSACTION

    -- do a receive
    WAITFOR (
        RECEIVE TOP(1)
            @MessageTypeName = message_type_name,
            @MessageBody = message_body,
            @ConversationHandle = [conversation_handle]
            FROM dbo.QueueName
    ), TIMEOUT 60000

    -- no messages? let us quit
    IF (@@ROWCOUNT = 0)
    BEGIN
        ROLLBACK TRANSACTION
        BREAK
    END

    BEGIN TRY
        -- cast the messagebody to XML
        SET @XML = CAST(@MessageBody AS XML);

        -- check if it is the message you are looking for
        IF @MessageTypeName = 'The-message-you-are-looking-for'
        BEGIN
            -- execute the stored procedure to process the message
            EXECUTE YourProcedureThatProcessTheMessage @XML

            -- construct a response message
            SELECT @Response = '<construct-a-response-message/>'

            -- send a response back to the caller
            SEND ON CONVERSATION @ConversationHandle
                MESSAGE TYPE [Your-response-message-type](@Response);

            -- end the conversation, if you dont need it anymore
            END CONVERSATION @ConversationHandle;
        END

        -- commit the transaction
        -- the message will be removed from the queue
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        -- roll back the transaction
        -- the message will not be removed from the queue
        ROLLBACK TRANSACTION
    END CATCH
END

Please note that this could create poison messages. When you roll back the transaction and do a RECEIVE, the same message will be fetched again. The error may occur again, and the transaction will roll back again. If an error like a primary key violation occurs, the error will keep occurring every time the message is processed. This means we end up in an infinite loop of RECEIVE, ERROR and ROLLBACK. I will explain poison messages in the next post.
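A queue reader can guard against this by counting failures per message and parking repeat offenders instead of retrying forever. A toy Python sketch of the idea (all names hypothetical; Service Broker's own built-in guard is to disable the queue after five consecutive rolled-back receives):

```python
from collections import defaultdict

MAX_ATTEMPTS = 5          # mirrors Service Broker's five-rollback threshold
attempts = defaultdict(int)
dead_letter = []          # parking place for messages that never process

def handle(message, process):
    """Try to process a message; after MAX_ATTEMPTS failures, move it to a
    dead-letter list instead of leaving it on the queue forever."""
    key = message["id"]
    try:
        process(message)
        attempts.pop(key, None)           # success: forget the failure count
        return True                       # consumed
    except Exception:
        attempts[key] += 1
        if attempts[key] >= MAX_ATTEMPTS:
            dead_letter.append(message)   # poison message: park it for inspection
            return True                   # treat as consumed so the queue drains
        return False                      # leave on the queue for another retry
```

Returning False corresponds to rolling back the RECEIVE; returning True corresponds to committing it.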

Tags: Service Broker

Jul 31 2008 3:20PM by Jacob Sebastian

Original Source:


Debugging a SQL Server non-yielding scheduler issue


October 3, 2013 | Posted in SQL Server

If you have read the How To Diagnose and Correct Errors 17883, 17884, 17887, and 17888 whitepaper, you would know what a 17883 error reported in SQL Server 2000 means. If you read the “SQL Server 2005 – SchedulerMonitor” section of the whitepaper, you will understand that the way non-yielding messages are reported for a SQL Server worker thread is much more refined than in its predecessor. We report a non-yielding worker thread as a Non-yielding Scheduler message in the SQL Server Errorlog. Since the whitepaper is very comprehensive in explaining SQL Server’s non-yielding detection algorithms, I shall not reiterate the same. In this post, I shall show you how to track down the cause of a non-yielding scheduler using public symbols in conjunction with the SQL Server Errorlog and Windows Event Logs.

Below is an excerpt from an Errorlog which is reporting a non-yielding scheduler message.

2011-07-24 22:17:45.850 Server       * BEGIN STACK DUMP:
2011-07-24 22:17:45.850 Server       *   07/24/11 22:17:45 spid 2388
2011-07-24 22:17:45.850 Server       *
2011-07-24 22:17:45.850 Server       * Non-yielding Scheduler
2011-07-24 22:17:45.850 Server       Stack Signature for the dump is 0x00000000000003B6
2011-07-24 22:17:53.380 Server       External dump process return code 0x20000001.
External dump process returned no errors.

2011-07-24 22:17:53.400 Server Process 0:0:0 (0x990) Worker 0x0000000005A6C1A0 appears to be non-yielding on Scheduler 1. Thread creation time: 12955600342903. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 0%. System Idle 99%. Interval: 70119 ms.

There are lines in the above excerpt whose significance I shall detail later in this post. The Windows Application Event Log will report this as an Informational message with EventID = 17883. This is what I have in my application event log for the above reported non-yielding condition:

Process 0:0:0 (0x990) Worker 0x0000000005A6C1A0 appears to be non-yielding on Scheduler 1. Thread creation time: 12955600342903. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 0%%. System Idle 99%%. Interval: 70119 ms.

The above message is the same message reported in the Errorlog excerpt above. Since the Errorlog doesn’t report the MDMP (dump file) number, you will have to track it down using the timestamp when the issue was reported or using the SQL Dumper log in case you are performing post-mortem analysis. The SQL Dumper log name is “SQLDUMPER_ERRORLOG.log” and it is located by default in the SQL Server LOG folder. You will find a message similar to the one shown below which will point you to the correct .mdmp file.

(2A30:2760) 07/24/11 22:17:52, ACTION, SQLDUMPER_UNKNOWN_APP.EXE, MiniDump completed: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\LOG\SQLDump0025.mdmp

Now I need to use the Windows Debugging Tools to load the .mdmp file. Note that the non-yielding message in the SQL Server Errorlog reported "External dump process returned no errors.", which means the dump generation routine didn't encounter any errors. The most common error while generating dumps (under a resource crunch, memory or CPU) is a timeout: the dump generation timed out and the dump is a delayed one. In that case the information contained in the dump file may or may not be enough for a complete post-mortem analysis, and Microsoft CSS would ask you to collect additional data for the next occurrence if the currently available data is not sufficient.

So let’s move on to the good part now… i.e. analysis of the dump. Keep in mind that this is a mini-dump, which means that there is a limited amount of information stored in the dump, and we are using public symbols to analyze it. Even with the private symbols that Microsoft CSS Engineers have access to, there might not be a whole lot they can decipher from a mini-dump, and they might request additional data!

When you load the dump using WinDBG, you will see the following information:

Loading Dump File [C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\SQLDump0025.mdmp]
User Mini Dump File: Only registers, stack and portions of memory are available

Comment: ‘Stack Trace’
Comment: ‘Non-yielding Scheduler’

Debug session time: Sun Jul 24 22:17:52.000 2011 (UTC + 5:30)

The above tells you that this is a mini-dump for a non-yielding scheduler condition, the location from where you loaded the dump, and the time when the dump was generated. Next I set my symbol path, directing the symbols downloaded from the Microsoft symbol server into a local symbol file cache on my machine.

What is a minidump? 
A memory dump which contains the memory for the call stack of all threads, the CPU registers and information about which modules are loaded.

.sympath srv*D:\PublicSymbols*

Then I issue a reload command to load the symbols for sqlservr.exe. This can also be done using CTRL+L: provide the complete string above (without .sympath), check the Reload checkbox and click OK. The only difference is that public symbols for all modules loaded in the dump will then be downloaded from the Microsoft Symbol Server, where available.

.reload /f sqlservr.exe

The next thing is to verify that the symbols were loaded correctly using the lmvm sqlservr command. If they were, you should see output like the following, with (pdb symbols) and the path to sqlservr.pdb in your local cache:

0:019> lmvm sqlservr
start             end                 module name
00000000`00c60000 00000000`04820000   sqlservr   (pdb symbols)          D:\publicsymbols\sqlservr.pdb\6A3527657A3F4090AB2C96F7008A55C52\sqlservr.pdb
Loaded symbol image file: sqlservr.exe
Mapped memory image file: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
Image path: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
Product version:  10.50.1790.0

If symbols were not loaded, then you will see an output as shown below.

0:019> lmvm sqlservr
start             end                 module name
00000000`00c60000 00000000`04820000   sqlservr   (export symbols)       sqlservr.exe
Loaded symbol image file: sqlservr.exe
Mapped memory image file: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
Image path: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe

Next I shall use the Thread ID noted above (0x990) in the Errorlog excerpt to switch to that thread and get the call stack for the thread which was reported as non-yielding.

0:019> ~~[0x990]s
00000000`76e06bda c3              ret
0:019> kL100



Looking at the callstack above, I can say that this is not the cause of the non-yielding condition (99.99999% true in most cases)!! The reason I can say this is that I find SwitchContext on the thread stack, which means that the thread has yielded. The sequence of events is as follows: SQL Server detected a non-yielding worker thread; by the time SQL Dumper was initiated to capture a memory dump of the SQL Server process, the thread had yielded and processing had moved on to the next work request in the queue.

However, if your dump didn’t have the top functions noted above but instead showed something like the two functions mentioned below, then you definitely have the right callstack, and the issue is due to stuck/stalled I/O.


The non-yielding scheduler message from the dump I investigated which had the above callstack had the following stuck-stalled IO warnings in the SQL Server Errorlog just prior to the dump getting generated:

SQL Server has encountered 218 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\tempdb.mdf] in database [tempdb] (2).

So now I definitely know that the non-yielding condition was due to faulty disk sub-system and the SQL Server instance at this point is just a victim of the issue!

If you see the following message in the SQL Server Errorlog along with the dump generation message, then it is definitely a delayed dump, and it could be missing valuable information that might have been pertinent to your analysis.

Timeout while waiting for external dump

Most of the common non-yielding conditions are documented in the whitepaper that I referred to above. If you are unable to get a callstack for the non-yielding issue and the symptoms or sequence of events leading up to the issue doesn’t match any KB Article, then it’s time to call CSS to get the memory dump analyzed with the help of the private symbols that the CSS engineers have access to.

Happy debugging!!

KB 315263 – How to read the small memory dump files that Windows creates for debugging

Original source:

Read More →

Finding Filtered Rows In The UltraWinGrid.RowsCollection


October 3, 2013 | Posted in Infragistics

Need to find out how many rows have been filtered having applied a column filter?


Simply query the above property, which will tell you how many rows there are as a result of the filter.

Hook into the AfterRowFilterChanged event to perform the required action when the filter changes.

Read More →

Parsing/Displaying XML with XmlReader and XmlWriter


October 3, 2013 | Posted in .Net

Imports System.Drawing
Imports System.IO
Imports System.Text
Imports System.Windows.Forms
Imports System.Xml


Public NotInheritable Class XMLViewer

Public Shared Sub ViewXML(ByVal parentForm As Form, ByVal xml As String)

If ((parentForm IsNot Nothing) AndAlso (Not String.IsNullOrEmpty(xml))) Then

Dim xmlDisplay As New System.Windows.Forms.Form
Dim brokerXML As New RichTextBox()
Dim output As StringBuilder = New StringBuilder()
Dim ws As XmlWriterSettings = New XmlWriterSettings()

Try

Dim formSize As New Size(CInt(parentForm.Width / 2), (parentForm.Height - CInt(parentForm.Height / 6)))

brokerXML.Multiline = True
brokerXML.Dock = DockStyle.Fill

xmlDisplay.Size = formSize
xmlDisplay.MinimumSize = formSize
xmlDisplay.Text = "XML Viewer"
xmlDisplay.StartPosition = FormStartPosition.CenterParent
xmlDisplay.Controls.Add(brokerXML)

ws.Indent = True

Using reader As XmlReader = XmlReader.Create(New StringReader(xml))

Using writer As XmlWriter = XmlWriter.Create(output, ws)

While reader.Read()

Select Case reader.NodeType
Case XmlNodeType.Element
' Copy the element, its attributes, and close it immediately if empty
writer.WriteStartElement(reader.Name)
writer.WriteAttributes(reader, True)
reader.MoveToElement()
If reader.IsEmptyElement Then
writer.WriteEndElement()
End If
Case XmlNodeType.Text
writer.WriteString(reader.Value)
Case XmlNodeType.XmlDeclaration, XmlNodeType.ProcessingInstruction
writer.WriteProcessingInstruction(reader.Name, reader.Value)
Case XmlNodeType.Comment
writer.WriteComment(reader.Value)
Case XmlNodeType.EndElement
writer.WriteFullEndElement()
End Select

End While

End Using

End Using

brokerXML.Text = output.ToString()

' Display the re-indented XML in a modal dialog
xmlDisplay.ShowDialog(parentForm)

Catch ex As XmlException
MessageBox.Show(parentForm, ex.Message, "XML Viewer")
Finally
brokerXML.Dispose()
xmlDisplay.Dispose()
End Try

End If

End Sub

End Class

Read More →

Accessing 32-bit DLLs from 64-bit code


October 3, 2013 | Posted in .Net

Migrating your 32-bit Windows application to a 64-bit machine can be problematic if you have 32-bit DLLs that you cannot re-write. Mike Becker shows you how you can access 32-bit DLLs from 64-bit code using built-in IPC mechanisms.

Originally published on DNJ Online, June 2007.

Microsoft’s 64-bit technology first appeared with Windows Server 2003 for Itanium 2 (also known as IA64 Architecture) and for eXtended technology CPUs (also known as x64 Architecture). It offers many advantages but also raises new issues for the software developer. For example, you may still need to access existing 32-bit DLLs from a 64-bit process.

A key advantage of 64-bit technology is its ability to address up to 8Tb of memory, against a maximum of 2Gb for 32-bit processes. As a result, 64-bit technology allows most data processing to take place in memory, without any need for temporary disk storage. This can considerably increase performance and open up new data processing scenarios. There are therefore good arguments for migrating current 32-bit software products to a 64-bit platform.

Many C or C++ applications are easy to migrate to a 64-bit platform, particularly if they are written in a monolithic fashion. Sometimes they just need to be rebuilt with an x64/IA64 compiler to run as native 64-bit applications. However, distributed or module-based software can cause more problems.


The conflict: 64-bit versus 32-bit

A major migration issue concerns 32-bit software components which cannot be migrated, perhaps because the source code is lost or one of the dependencies cannot be migrated.

Your 32-bit software is still supported on a 64-bit platform, as 32-bit processes can be executed inside the dedicated 'Windows on Windows' (WOW64) subsystem which is part of all 64-bit Windows operating systems. However, a 64-bit process cannot load a 32-bit module into its process space, and a 32-bit process cannot load a 64-bit module into its process space. The only way that communication can happen between 32-bit and 64-bit modules is through interprocess communication (IPC). In other words, 32-bit and 64-bit processes can exchange data using IPC techniques such as out-of-process COM, sockets, Windows messages or memory-mapped files.

A 32-bit software product contains the main module WeatherReport which calls into the DLL WeatherStationControl. As long as both the main module and the DLL are 32-bit processes the product can run on both 32-bit and 64-bit platforms (inside WOW64). If both the main module and the DLL are migrated to the 64-bit platform, then they can both run in a native 64-bit 
process. However if only the main module is migrated to 64-bit, it will not be able to load the 32-bit DLL.

The best way to migrate such a product to a 64-bit platform is to migrate both the main module and the dependency DLL, but if the dependency DLL cannot be migrated then it cannot be loaded into the 64-bit process and the application won’t work.

The solution: a surrogate process

This issue can be solved by loading the dependency DLL into a separate 32-bit process space. The main module, running as a 64-bit process, can then access the dependency DLL across the process boundary using IPC (see MSDN reference).

A 64-bit process can access a 32-bit DLL across a process boundary if the 32-bit DLL is loaded into a separate 32-bit surrogate process space, and the application makes use of the built-in IPC mechanisms that support data exchange between 32-bit and 64-bit processes.

This solution requires additional work as the 32-bit surrogate process that loads the 32-bit DLL and exposes its API must be created. Also, some changes will be necessary on the 64-bit side as the consumer must use one of the IPC techniques instead of directly accessing the 32-bit DLL. It is worth noting that, in extreme cases, this additional work could be comparable to the work involved in developing a 64-bit version of the 32-bit DLL from scratch.

One possible way of reducing these costs is to implement a 64-bit 'wrapper' DLL that exposes the same functions, parameters, types and so forth as the original 32-bit DLL. This wrapper DLL can then make IPC-based calls to the original 32-bit DLL, which has been loaded into a surrogate process.

A 64-bit wrapper DLL (WeatherStationControl64.DLL) exports the same interface as the original 32-bit DLL (WeatherStationControl.DLL), so providing the same services to the main process 
(WeatherReport) without you needing to make any changes to the code of either the main process or the 32-bit DLL. This wrapper DLL delegates calls to the 32-bit DLL, which is running in a surrogate process, using IPC.

The main costs of this solution arise from implementing the surrogate process, loading the 32-bit DLL and implementing the 64-bit wrapper DLL. The actual cost depends on the IPC technique used to exchange data between the 64-bit and 32-bit processes.

COM as an IPC mechanism

One of the most popular IPC techniques is DCOM (Distributed COM). Originally designed for distributed systems, DCOM is still supported on 64-bit Windows platforms, so both 32-bit and 64-bit COM modules can be built. The only limitation is that 64-bit and 32-bit modules cannot reside in the same process space, so they have to interoperate across process boundaries. This is done using 'out-of-process' (OOP) COM components, in the following way:

  1. Create a 32-bit component implementing a COM object which loads and calls into the 32-bit DLL, and exposes the 32-bit DLL interface as a COM interface.
  2. Configure this COM component for OOP, either by creating a standard COM+ OOP application (using dllhost as the surrogate process), or by implementing the COM component as a dedicated COM server process using, for example, an ATL COM server as the hosting process or a Win32 service as a dedicated COM server.
  3. Create a 64-bit wrapper DLL which implements the same interface as the original 32-bit DLL, imports the COM interface of the COM object created above, translates calls to the exposed interface into calls to the COM object interface, transfers the call parameters, receives return values and delegates them to the callers.

The 32-bit DLL (WeatherStationControl.DLL) is used by a COM object (WeatherStationWrapper) which exposes the interface of the 32-bit DLL as a COM interface. The 64-bit wrapper DLL (WeatherStationControl64.DLL) makes calls to this interface which are delegated to the original 32-bit DLL. The main process (WeatherReport) calls the interface exposed by the 64-bit wrapper DLL but is in fact served by the original 32-bit DLL.

This solution should be significantly less expensive than creating a 64-bit version of the 32-bit DLL from scratch. Microsoft’s ATL technology is supported by Visual Studio with wizards and ready-written code fragments which should also help lower migration costs by saving time and reducing the likelihood of errors.


There are, however, a number of things that you still need to keep in mind:

1. Alignment
The alignment of data in memory is different for 32-bit and 64-bit processes. This means that your more complicated custom data structures may be serialized by a 32-bit process in a way that is different from that expected by a 64-bit process, and vice versa. The Microsoft Windows Platform SDK includes documentation about the differences in memory data alignment between 32-bit and 64-bit processes.

2. Data types
In most instances, 64-bit Windows uses the same data types as the 32-bit version. The differences are mainly in pointers, which are 32 bits long in 32-bit Windows and 64 bits long in 64-bit Windows. Pointer-derived data types such as HANDLE and HWND are also different between the 32-bit and 64-bit versions. Windows helps you to keep a single code base for both 32-bit and 64-bit software versions by offering polymorphic data types that have a different length depending on the target platform; for example, INT_PTR declares 'an integer with the size of a pointer'. Any variable of this type is an integer which is 32 bits long on a 32-bit platform and 64 bits long on a 64-bit platform.

3. COM initialize
You can only access a COM object from a Windows application once COM has been successfully initialized. The COM API function CoInitialize() must be called for each thread that is going to access a COM object before any COM interface calls are made, and the complementary call CoUninitialize() must be performed before the thread exits (see MSDN reference). This rule must be strictly respected if the main process's calls to the original 32-bit DLL are multi-threaded.

4. Security
The OOP COM component instantiates COM objects in a separate process, whether a surrogate process, a COM server or Win32 service. This can mean that calls to the 32-bit DLL may happen in a different security context to the main process, especially if the main process makes intensive use of impersonation. If this is the case you may want to configure dedicated credentials for the OOP component, or implement internal impersonation in the COM object.

5. Performance
The IPC-based solution is almost certain to be slower than making direct calls into the DLL. Data marshaling over process boundaries, automatic data conversion between 32 and 64 bits, WOW64 features and delays while instantiating the COM object will all impact performance. However, there are numerous optimization techniques that you can use, such as COM pooling, caching inside the wrapper DLL, 'chunky' versus 'chatty' calls over process boundaries, implementing performance-critical interfaces directly in the 64-bit DLL, and so forth.

6. Redirection
The WOW64 subsystem is in charge of supporting 32-bit modules on 64-bit Windows. To avoid unwanted collisions between 32-bit and 64-bit software, particularly when accessing the file system and registry, WOW64 isolates 32-bit modules using a process called 'redirection' (see MSDN reference). For example, for a 64-bit process the call to obtain the system folder pathname returns %WINDOWS%\System32, but for a 32-bit process it returns %WINDOWS%\SysWOW64. The program folder path for a 64-bit process is 'Program Files', but for a 32-bit process it is 'Program Files (x86)'. The registry key HKEY_LOCAL_MACHINE\Software contains 64-bit process settings and data, while the key HKEY_LOCAL_MACHINE\Software\WOW6432Node contains 32-bit process settings and data.

This redirection is activated automatically when software modules call to popular pre-defined system paths or registry keys.

7. Kernel modules
The solution proposed here is for 32-bit user-level DLLs, and doesn't work with 32-bit drivers. This is because 32-bit kernel modules cannot be used on a 64-bit platform, with no exceptions or workarounds. If your product includes any kernel-level module, such as a device driver, then the only possible migration route is to re-write the kernel module for 64-bit.

8. Setup
Using a COM OOP component in place of direct calls to a DLL requires changes to your setup procedure, as the COM components must be installed and registered by the system. As discussed under 'Security' above, this may involve configuring dedicated credentials for the COM component.

Read More →



October 3, 2013 | Posted in SQL Server

Service Broker not working for you? Try altering this setting in your database; it may save you a lot of time!

Guidelines for using the TRUSTWORTHY database setting in SQL Server


You can use the TRUSTWORTHY database setting to indicate whether the instance of Microsoft SQL Server trusts the database and the contents within the database. By default, this setting is set to OFF. However, you can set it to ON by using the ALTER DATABASE statement. We recommend that you leave this setting set to OFF to mitigate certain threats that may be present when a database is attached to the server and the following conditions are true:

  • The database contains malicious assemblies that have an EXTERNAL_ACCESS or UNSAFE permission setting. For more information, visit the following Microsoft Developer Network (MSDN) website:

    CLR Integration Security

  • The database contains malicious modules that are defined to execute as users that are members of a group that has administrative credentials. For more information, visit the following MSDN website:

    Extending Database Impersonation by Using EXECUTE AS

Note By default, the TRUSTWORTHY setting is set to ON for the MSDB database. Altering this setting from its default value can result in unexpected behavior by SQL Server components that use the MSDB database.

If the TRUSTWORTHY setting is set to ON, and if the owner of the database is a member of a group that has administrative credentials, such as the sysadmin group, the database owner may be able to create and to run unsafe assemblies that can compromise the instance of SQL Server.

More Information

In an Internet service provider (ISP) environment (for example, in a web-hosting service), each customer is permitted to manage their own database and is restricted from accessing system databases and other user databases. For example, the databases of two competing companies could be hosted by the same ISP and exist in the same instance of SQL Server. Dangerous code could be added to a user database when the database was attached to its original instance, and the code would be enabled on the ISP instance when the database was deployed. This situation makes controlling cross-database access crucial.
If the same general entity owns and manages each database, it is still not a good practice to establish a “trust relationship” with a database unless an application-specific feature, such as cross-database Service Broker communication, is required. A trust relationship between databases can be established by enabling cross-database ownership chaining or by marking a database as trusted by the instance by using the TRUSTWORTHY property. The is_trustworthy_on column of the sys.databases catalog view indicates whether a database has its TRUSTWORTHY bit set.

Best practices for database ownership and trust include the following:

• Have distinct owners for databases. Not all databases should be owned by the system administrator.
• Limit the number of owners for each database.
• Confer trust selectively.
• Leave the Cross-Database Ownership Chaining setting set to OFF unless multiple databases are deployed as a single unit.
• Migrate usage to selective trust instead of using the TRUSTWORTHY property.

The following code sample can be used to obtain a list of databases that have the TRUSTWORTHY bit ON and whose database owner belongs to the sysadmin server role.

SELECT SUSER_SNAME(d.owner_sid) AS database_owner, d.name AS database_name
FROM sys.server_principals r
INNER JOIN sys.server_role_members m ON r.principal_id = m.role_principal_id
INNER JOIN sys.server_principals p ON p.principal_id = m.member_principal_id
INNER JOIN sys.databases d ON SUSER_SNAME(d.owner_sid) = p.name
WHERE d.is_trustworthy_on = 1 AND d.name NOT IN ('MSDB') AND r.type = 'R' AND r.name = N'sysadmin'

You can run the following query to determine the TRUSTWORTHY property of the MSDB database:

SELECT name, TrustWorthySetting =
CASE is_trustworthy_on
WHEN 1 THEN 'TrustWorthy setting is ON for MSDB'
ELSE 'TrustWorthy setting is OFF for MSDB'
END
FROM sys.databases WHERE database_id = 4

If this query shows that the TRUSTWORTHY property is set to OFF, you can run the following statement to set the TRUSTWORTHY property back to its default value (which is ON for MSDB).

ALTER DATABASE msdb SET TRUSTWORTHY ON

The following table provides more information on the products or tools that automatically check for this condition on your instance of SQL Server and the versions of the SQL Server product that the rule is evaluated against.

Rule software: System Center Advisor
Rule title: SQL Server msdb system database has trustworthy option set to OFF
Rule description: System Center Advisor checks whether the database option Trustworthy is set to OFF for the msdb system database for this instance of SQL Server. If it is set to OFF, then the advisor generates an alert. Review the information provided in the "Information Collected" section of the advisor alert and follow the resolutions that are provided in this article.
Product versions against which the rule is evaluated: SQL Server 2008, SQL Server 2008 R2

Rule software: SQL Server 2008 R2 Best Practice Analyzer (SQL Server 2008 R2 BPA)
Rule title: Trustworthy Bit
Rule description: The SQL Server 2008 R2 BPA provides a rule to detect when a database has the TRUSTWORTHY property set to ON and whether the database owner is a member of the sysadmin server role. If you run the BPA tool and receive a warning titled "Engine - Trustworthy Bit," we recommend that you run the query listed in the previous section to identify the database that has the TRUSTWORTHY bit set, and that you consider changing the setting to OFF.
Product versions against which the rule is evaluated: SQL Server 2008, SQL Server 2008 R2

Rule software: SQL Server 2012 Best Practice Analyzer (SQL Server 2012 BPA)
Rule title: Trustworthy Bit
Rule description: The SQL Server 2012 BPA provides a rule to detect when a database has the TRUSTWORTHY property set to ON and whether the database owner is a member of the sysadmin server role. If you run the BPA tool and receive a warning titled "Engine - Trustworthy Bit," we recommend that you run the query listed in the previous section to identify the database that has the TRUSTWORTHY bit set, and that you consider changing the setting to OFF.
Product versions against which the rule is evaluated: SQL Server 2012

Read More →

Monitoring Website Downtime With Google Docs


October 3, 2013 | Posted in Web

Do you have a website? Would you like to receive alerts on your email address as soon as your site goes down or becomes inaccessible to users?

You can either subscribe to one of these website monitoring services to track your site’s uptime (and downtime) or you can build your own site monitor with the help of Google Docs. The one big advantage with Google Docs is that it can be configured to check your website status every minute and thus you’ll instantly know if your site is down. And it’s free.

Website Monitor - Logs

Website Monitoring with Google Docs

It will take a minute to configure Google Docs as an uptime monitor for your website.

  1. Assuming that you are signed in to your Google Account, click here to make a copy of the Google Docs sheet.
  2. Put your website’s URL in cell E3 and your email address in cell E5. This is the address where you wish to receive notifications for downtime and uptime.
  3. Go to Tools -> Script Editor -> Resources -> Current Script's Triggers and set a Time-Driven trigger for every minute (or every 5 minutes).
  4. Save the Trigger and Google Docs will show a big red warning asking for authorization. Just accept it and Save the trigger again.

That’s it. The Google Docs sheet will monitor your website in the background at all times and will send an email alert whenever it has trouble accessing the website. If the issue is resolved, you’ll get another notification saying “all’s well.”

Also, the downtime details (including the server responses) are logged in the Google Docs sheet itself so you can later analyze the downtime in greater detail. You can browse the source code to understand how monitoring works behind the scenes.

One more thing. If you know a bit of programming, create a new Twitter app specifically for tracking your site’s uptime and Google Docs can then alert you through tweets and SMS text messages if your site is having problems.



Original source:

Read More →

How To Format The UltraFormattedTextEditor Value Using FormattedTextUIEditorForm


October 3, 2013 | Posted in Infragistics

Using and displaying correctly formatted text in the Infragistics UltraFormattedTextEditor control is somewhat cumbersome, especially when you are dealing with formatted text that somebody happened to copy and paste from another application such as Microsoft Word. In my case, this was being done in an administration screen, which stored a value that the user later wanted to display in a nicely formatted way; I chose to use the UltraFormattedTextEditor control, which seemed the logical path to go down for displaying the required value.

I soon realised that I needed to limit how the formatted text was stored in SQL Server, so I opted to use the FormattedTextUIEditorForm, which is the form you see in design mode when looking at the UltraFormattedTextEditor control. Unfortunately this is not exposed with the out-of-the-box Infragistics products (which is a great shame); you can, however, get to it if you have their source code. If you do, you can find it in the SupportDialogs folder, and from there the FormattedTextEditor folder; the class you are interested in is FormattedTextUIEditorForm.cs

It was a bit of a pain to import Infragistics2.Win.SupportDialogs.csproj into my solution and get it to compile, but once this was working, I could call it directly like so (which is exactly what I needed to supply the user with, to format their text how I needed it to be stored in the administration screen):


Dim result As DialogResult
Dim value As String = String.Empty

Try

Using formattedTextDialog As New FormattedTextUIEditorForm()

If (Me.DescriptionTextbox.Text.Trim().Length > 0) Then
formattedTextDialog.Value = Me.DescriptionTextbox.Text
End If

result = formattedTextDialog.ShowDialog()

value = CStr(formattedTextDialog.Value)

End Using

If (result = DialogResult.OK AndAlso value.Length > 0) Then
Me.DescriptionTextbox.Text = value
End If

Catch ex As Exception
' handle exception
End Try


I needed to capture the value string (which is actually the raw text value) and store this in SQL Server, which would later get served up to the UltraFormattedTextEditor control. I did have an issue finding all of the required icons; I seemed to find them all except a few, including left align, right align etc. If you’ve used this in design view, you’ll know that you get the value tab, which shows the text as you want it displayed, and also the raw text tab, which is the formatted text including the required html tags that you need to display in the UltraFormattedTextEditor control. I’ve included two screenshots showing my implementation; if it helps somebody else trying to overcome the same hurdle as me, then it was worth putting this article together:

Infragistics have since suggested that another solution would be to use the “Ribbon with Formatted Text Editor” from their installed samples. I’ve checked it out and it does actually do what I needed it to do, so this may be a simpler approach to achieving your goal; however, I still like the fact that the FormattedTextUIEditorForm shows the raw text value on the fly. See the screenshot below:

Read More →

Win32 API user32.dll – SendMessage() Function


October 3, 2013 | Posted in .Net

“user32.dll implements the Windows USER component that creates and manipulates the standard elements of the Windows user interface, such as the desktop, windows, and menus. It thus enables programs to implement a graphical user interface that matches the Windows look and feel. Programs call functions from Windows USER to perform operations such as creating and managing windows, receiving window messages (which are mostly user input such as mouse and keyboard events, but also notifications from the operating system), displaying text in a window, and displaying message boxes.

Many of the functions in User32.dll call upon GDI functions exported by Gdi32.dll to do the actual rendering of the various elements of the user interface. Some types of programs will also call GDI functions directly to perform lower-level drawing operations within a window previously created via User32 functions.”

I needed to suspend the MDI parent from painting whilst displaying child forms in an application. The following code illustrates how this can be achieved:


// requires: using System.Runtime.InteropServices;
[DllImport("user32.dll")]
public static extern int SendMessage(IntPtr hWnd, Int32 wMsg, bool wParam, Int32 lParam);
private const int WM_SETREDRAW = 11;

private void DoSomething()
{
    // suspend drawing
    SendMessage(this.Handle, WM_SETREDRAW, false, 0);

    // do something here

    // resume drawing
    SendMessage(this.Handle, WM_SETREDRAW, true, 0);

    // repaint client area
    this.Refresh();
}
For more information on SendMessage(), see the MSDN documentation.

Read More →

The perils of GC.Collect (or when to use GC.Collect)


October 3, 2013 | Posted in .Net

An interesting article (which links to other interesting articles) on garbage collection, and on when to force the collector to free memory if you’re seeing OutOfMemoryException errors.
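If you do decide to force a collection – for example after dropping a very large object graph and before a memory-hungry operation – the commonly recommended pattern is a full blocking collection followed by finalization. A sketch (`largeDataSet` is a hypothetical field standing in for whatever you have just released):

```csharp
// Only as a last resort: release the large object graph first,
// then force a full, blocking collection and run any pending finalizers.
largeDataSet = null;

GC.Collect();                    // force a full collection of all generations
GC.WaitForPendingFinalizers();   // let finalizers release unmanaged resources
GC.Collect();                    // reclaim objects freed by those finalizers
```

In almost every other situation, letting the collector decide for itself performs better than forcing it.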

Read More →

Where’s the hash key on a Mac?


October 3, 2013 | Posted in Mac

You’ll have noticed that the # character is missing on the MacBook Pro keyboard (mine is a mid-2012 model).

It’s not on the keyboard mapping, which is a bit weird and frustrating, but you can simply press ALT+3 to get the required character.

If like me you are running Windows in Parallels Desktop 8 for Mac, the key combination is CTRL+ALT+3.


Read More →

Exchanging Data Between Different Parallels Desktop For Mac Virtual Machines


October 3, 2013 | Posted in Mac

Parallels Explorer’s multi-window mode makes it easy to manage the contents of different virtual machines: you can copy data from one virtual machine to another simply by dragging it. Since Parallels Explorer supports third-party file formats as well as Parallels’ own, you can also exchange data between Parallels virtual machines and third-party virtual machines.

Note: If you copy files to a Windows virtual machine using Parallels Explorer or Parallels Mounter, you need to log in to Windows as an administrator to be able to open them inside the virtual machine.

To copy a file from a third-party virtual machine to a Parallels virtual machine

  1. Open Parallels Explorer.
  2. Click the Virtual Machines icon in the sidebar.
  3. In the working area, choose the third-party virtual machine that stores the file. The list of volumes used by the virtual machine appears.
  4. Choose the volume where the file is stored.
  5. Choose Open New Window from the File menu. One more Parallels Explorer window opens.
  6. In the working area, choose the Parallels virtual machine that will store the file. The list of volumes used by the virtual machine appears.
  7. Choose the volume where the copied file will be placed.
  8. Open the destination folder.
  9. To copy the file, drag it from the third-party virtual machine window to the Parallels virtual machine window.

To copy a file from one Parallels virtual machine to another Parallels virtual machine

  1. Open Parallels Explorer.
  2. Click the Virtual Machines icon in the sidebar.
  3. In the working area, choose the virtual machine (for example, Windows XP) that stores the file. The list of volumes used by the virtual machine appears.
  4. Choose the volume where the file is stored.
  5. Choose Open New Window from the File menu.
  6. In a new Parallels Explorer window, click the Virtual Machines icon in the sidebar.
  7. In the working area, choose the virtual machine where the file will be placed (for example, Windows 98). The list of volumes used by the virtual machine appears.
  8. Choose the volume where the copied file will be placed.
  9. Open the destination folder.
  10. To copy the file, drag it from the Windows XP virtual machine window to the Windows 98 virtual machine window.

Read More →

Higgs boson-like particle discovery claimed at LHC


October 3, 2013 | Posted in Science

Cern scientists reporting from the Large Hadron Collider (LHC) have claimed the discovery of a new particle consistent with the Higgs boson.

The particle has been the subject of a 45-year hunt to explain how matter attains its mass.

Both of the Higgs boson-hunting experiments at the LHC see a level of certainty in their data worthy of a “discovery”.

More work will be needed to be certain that what they see is a Higgs, however.

The results announced at Cern (European Organization for Nuclear Research), home of the LHC in Geneva, were met with loud applause and cheering.

Prof Peter Higgs, after whom the particle is named, wiped a tear from his eye as the teams finished their presentations in the Cern auditorium.

“I would like to add my congratulations to everyone involved in this achievement,” he added later.

“It’s really an incredible thing that it’s happened in my lifetime.”

The CMS team claimed they had seen a “bump” in their data corresponding to a particle weighing in at 125.3 gigaelectronvolts (GeV) – about 133 times heavier than the protons that lie at the heart of every atom.


They claimed that by combining two data sets, they had attained a confidence level just at the “five-sigma” point – about a one-in-3.5 million chance that the signal they see would appear if there were no Higgs particle.

However, a full combination of the CMS data brings that number just back to 4.9 sigma – a one-in-two million chance.

Prof Joe Incandela, spokesman for the CMS, was unequivocal: “The results are preliminary but the five-sigma signal at around 125 GeV we’re seeing is dramatic. This is indeed a new particle,” he told the Geneva meeting.

Atlas results were even more promising, at a slightly higher mass: “We observe in our data clear signs of a new particle, at the level of five sigma, in the mass region around 126 GeV,” said Dr Fabiola Gianotti, spokeswoman for the Atlas experiment at the LHC.

Peter Higgs joined three of the six theoreticians who first predicted the Higgs at the conference

Prof Rolf Heuer, director-general of Cern, commented: “As a layman I would now say I think we have it.”

“We have a discovery – we have observed a new particle consistent with a Higgs boson. But which one? That remains open.

“It is a historic milestone but it is only the beginning.”

Commenting on the emotions of the scientists involved in the discovery, Prof Incandela said: “It didn’t really hit me emotionally until today because we have to be so focussed… but I’m super-proud.”

Dr Gianotti echoed his thoughts, adding: “The last few days have been extremely intense, full of work, lots of emotions.”

A confirmation that this is the Higgs boson would be one of the biggest scientific discoveries of the century; the hunt for the Higgs has been compared by some physicists to the Apollo programme that reached the Moon in the 1960s.

Statistics of a ‘discovery’

  • Particle physics has an accepted definition for a discovery: a “five-sigma” (or five standard-deviation) level of certainty
  • The number of sigmas measures how unlikely it is to get a certain experimental result as a matter of chance rather than due to a real effect
  • Similarly, tossing a coin and getting a number of heads in a row may just be chance, rather than a sign of a “loaded” coin
  • A “three-sigma” level represents about the same likelihood as tossing eight heads in a row
  • Five sigma, on the other hand, would correspond to tossing more than 20 in a row
  • Independent confirmation by other experiments turns five-sigma findings into accepted discoveries
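The coin-toss analogy in the list above is easy to verify: the chance of n heads in a row from a fair coin is 0.5^n, which puts 21 heads in the same order of magnitude as the one-in-3.5-million five-sigma odds quoted earlier. A quick sketch (not part of the original article):

```csharp
using System;

class SigmaOdds
{
    static void Main()
    {
        // Probability of n heads in a row with a fair coin is 0.5^n
        double eightHeads = Math.Pow(0.5, 8);       // ~0.0039, roughly three-sigma territory
        double twentyOneHeads = Math.Pow(0.5, 21);  // ~4.8e-7, the same order of magnitude
                                                    // as the five-sigma one-in-3.5-million odds

        Console.WriteLine($"8 heads:  1 in {1 / eightHeads:F0}");
        Console.WriteLine($"21 heads: 1 in {1 / twentyOneHeads:F0}");
    }
}
```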

Scientists would then have to assess whether the particle they see behaves like the version of the Higgs particle predicted by the Standard Model, the current best theory to explain how the Universe works. However, it might also be something more exotic.

All the matter we can see appears to comprise just 4% of the Universe, the rest being made up by mysterious dark matter and dark energy.

A more exotic version of the Higgs could be a bridge to understanding the 96% of the Universe that remains obscure.

Scientists will have to look at how the Higgs decays – or transforms – into other, more stable particles after being produced in collisions at the LHC.

Dr Pippa Wells, a member of the Atlas experiment, said that several of the decay paths already showed deviations from what one would expect of the Standard Model Higgs.

For example, a decay path where the Higgs transforms into two photon particles was “a bit on the high side”, she explained.

These could get back into line as more statistics are added, but on the other hand, they may not.

“We’re reaching into the fabric of the Universe at a level we’ve never done before,” said Prof Incandela.

“We’re on the frontier now, on the edge of a new exploration. This could be the only part of the story that’s left, or we could open a whole new realm of discovery.”


The Standard Model and the Higgs boson

• The Standard Model is the simplest set of ingredients – elementary particles – needed to make up the world we see in the heavens and in the laboratory

• Quarks combine together to make, for example, the proton and neutron – which make up the nuclei of atoms today – though more exotic combinations were around in the Universe’s early days

• Leptons come in charged and uncharged versions; electrons – the most familiar charged lepton – together with quarks make up all the matter we can see; the uncharged leptons are neutrinos, which rarely interact with matter

• The “force carriers” are particles whose movements are observed as familiar forces such as those behind electricity and light (electromagnetism) and radioactive decay (the weak nuclear force)

• The Higgs boson came about because although the Standard Model holds together neatly, nothing requires the particles to have mass; for a fuller theory, the Higgs – or something else – must fill in that gap


By Paul Rincon Science editor, BBC News website, Geneva

Read More →