Team Foundation Service Burndown/Reports Aren’t Updating


October 3, 2013 | Posted in Team Foundation Server

If your TFS burndown chart and team reports such as Sprint Cumulative Flow aren’t updating, the likelihood is that the Team Foundation Server Task Scheduler service isn’t running on your Team Foundation Server. Under Windows services (from Administrative Tools or Computer Management), look for Visual Studio Team Foundation Service Task Scheduler and start the service if it isn’t already running.

If it doesn’t update your burndown or reports straight away, there is a way to probe the service’s current status and to run the scheduled tasks manually using the WarehouseController web service. The Windows service above is generally located under C:\Program Files\Microsoft Visual Studio 2008 Team Foundation Server\TFSServerScheduler, which is where you’ll find a file called ScheduledServices.xml. In there you will find the warehouse controller URL (run this in a web browser), which can be used to probe the web service’s state along with other such queries.

http://servername:8080/Warehouse/v1.0/WarehouseController.asmx

The two that you probably want to use are called:

GetWarehouseStatus

<WarehouseStatus xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Controller/03">Idle</WarehouseStatus>

Run

<boolean xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Controller/03">true</boolean>

The latter will run the scheduled tasks (give it a few minutes and you should see that your report data and burndown charts have sprung to life).

For more information see http://msdn.microsoft.com/en-us/library/ms252450(v=vs.80).aspx


How to select a value from an XML field in SQL Server


October 3, 2013 | Posted in SQL Server

Need to select some values quickly from an XML field in SQL Server?

select
    Field.value('(/OuterNode//InnerNode/node())[1]', 'varchar(50)') as Value
from [schema].tblTemp with (nolock)
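
To see the pattern in action without a table, here is a minimal sketch run against an XML variable; the node names and value are made up to match the query above:

-- sample XML whose shape matches the query above
DECLARE @x XML = '<OuterNode><Wrapper><InnerNode>hello</InnerNode></Wrapper></OuterNode>';

-- value() takes an XQuery path plus the SQL type to return;
-- the [1] predicate picks the first matching node
SELECT @x.value('(/OuterNode//InnerNode/node())[1]', 'varchar(50)') AS Value; -- returns: hello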


Be agile!


October 3, 2013 | Posted in Agile


Why can’t I add or move columns in SSMS designer view?


October 3, 2013 | Posted in SQL Server

SSMS is by default quite picky about what you can add or move without dropping and recreating a table. You can switch off this default behavior by unchecking the following option under Tools -> Options -> Designers -> Table and Database Designers: “Prevent saving changes that require table re-creation”.


How can I put a Service Broker message back onto the queue, if my processing fails?


October 3, 2013 | Posted in SQL Server

One of the questions I had in mind when I started working with Service Broker was about the safety of my messages. Service Broker is a reliable messaging platform, capable of delivering messages to the target without fail. But what if the target receives a message and cannot process it?

When a queue reader program reads a message from the queue with the RECEIVE command, the message is removed from the queue. If the RETENTION setting of the queue is set to ON, the message is not really removed from the queue; instead its status is updated to 0 to indicate that the message has been read and processed. (We discussed RETENTION in an earlier post.) In either case, we will not be able to receive the message a second time.
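
As a reminder, RETENTION is a queue-level option; a minimal sketch using the sample queue name from the code below:

-- keep received messages on the queue (their status flips to 0)
-- instead of deleting them, at the cost of extra storage
ALTER QUEUE dbo.QueueName WITH RETENTION = ON;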

To make sure that messages are reliably processed, we need to put the RECEIVE command in a transaction. The processing of the message should happen within the transaction, and if an error occurs, the transaction should ROLLBACK. When the transaction in which the RECEIVE operation executes is rolled back, the message is not removed from the queue (or updated to ‘read’). The next RECEIVE command will fetch the message once again. The following code sample demonstrates this.

-- local variables
DECLARE    @XML XML, 
        @MessageBody VARBINARY(MAX), 
        @MessageTypeName SYSNAME, 
        @ConversationHandle UNIQUEIDENTIFIER,
        @Response XML

BEGIN TRY
    WHILE (1 = 1)
    BEGIN
        -- start the transaction
        BEGIN TRANSACTION

        -- do a receive
        WAITFOR (
            RECEIVE TOP(1)
                @MessageTypeName = message_type_name,
                @MessageBody = message_body,
                @ConversationHandle = [conversation_handle]
                FROM dbo.QueueName
        ), TIMEOUT 60000

        -- no messages? let us quit
        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION
            BREAK
        END 

        -- cast the messagebody to XML
        SET @XML = CAST(@MessageBody AS XML);

        -- check if it is the message you are looking for                
        IF @MessageTypeName = 'The-message-you-are-looking-for'
        BEGIN
            -- execute the stored procedure to process the message
            EXECUTE YourProcedureThatProcessTheMessage @XML

            -- construct a response message
            SELECT @Response = '<construct-a-response-message/>'

            -- send a response back to the caller
            SEND ON CONVERSATION @ConversationHandle
                MESSAGE TYPE [Your-response-message-type](@response);

            -- end the conversation, if you dont need it anymore
            END CONVERSATION @ConversationHandle;
        END

        -- commit the transaction
        -- the message will be removed from the queue
        COMMIT TRANSACTION            
    END
END TRY
BEGIN CATCH
    -- roll back the transaction
    -- the message will not be removed from the queue
    ROLLBACK TRANSACTION
END CATCH

 

Please note that this could create poison messages. When you roll back the transaction and do a RECEIVE, the same message is fetched again. The error may then occur again, and the transaction will roll back again. If an error like a primary key violation occurs, the error will recur every time the message is processed. This means we end up in an infinite loop of RECEIVE, ERROR and ROLLBACK. I will explain poison messages in the next post.
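
In the meantime, it is worth knowing that SQL Server ships with built-in poison message handling, which disables a queue after five consecutive transaction rollbacks. A sketch of the relevant option on SQL Server 2008 and later, using the sample queue name from the code above:

-- when ON (the default), the queue is disabled after five consecutive
-- rollbacks, which breaks the infinite loop described above
ALTER QUEUE dbo.QueueName WITH POISON_MESSAGE_HANDLING (STATUS = ON);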

Tags: Service Broker

Jul 31 2008 3:20PM by Jacob Sebastian

Original Source: http://beyondrelational.com/modules/2/blogs/28/posts/10454/service-broker-how-can-i-put-a-message-back-into-the-queue-if-my-processing-fails.aspx


Debugging a SQL Server non-yielding scheduler issue


October 3, 2013 | Posted in SQL Server

If you have read the How To Diagnose and Correct Errors 17883, 17884, 17887, and 17888 whitepaper, you will know what a 17883 error reported in SQL Server 2000 means. If you read the “SQL Server 2005 – SchedulerMonitor” section of the whitepaper, you will understand that the way non-yielding messages are reported for a SQL Server worker thread is much more refined than in its predecessor. We report a non-yielding worker thread as a Non-yielding Scheduler message in the SQL Server Errorlog. Since the whitepaper is very comprehensive in explaining SQL Server’s non-yielding detection algorithms, I shall not reiterate them. In this post, I shall show you how to track down the cause of a non-yielding scheduler using public symbols in conjunction with the SQL Server Errorlog and Windows Event Logs.

Below is an excerpt from an Errorlog which is reporting a non-yielding scheduler message.

2011-07-24 22:17:45.850 Server       * BEGIN STACK DUMP:
2011-07-24 22:17:45.850 Server       *   07/24/11 22:17:45 spid 2388
2011-07-24 22:17:45.850 Server       *
2011-07-24 22:17:45.850 Server       * Non-yielding Scheduler
2011-07-24 22:17:45.850 Server       Stack Signature for the dump is 0x00000000000003B6
2011-07-24 22:17:53.380 Server       External dump process return code 0x20000001.
External dump process returned no errors.

2011-07-24 22:17:53.400 Server Process 0:0:0 (0x990) Worker 0x0000000005A6C1A0 appears to be non-yielding on Scheduler 1. Thread creation time: 12955600342903. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 0%. System Idle 99%. Interval: 70119 ms.

There are lines in the above excerpt whose significance I shall detail later in this post. The Windows Application Event Log will report this as an Informational message with EventID = 17883. This is what I have in my application event log for the above reported non-yielding condition:

Process 0:0:0 (0x990) Worker 0x0000000005A6C1A0 appears to be non-yielding on Scheduler 1. Thread creation time: 12955600342903. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 0%%. System Idle 99%%. Interval: 70119 ms.

The above message is the same message reported in the Errorlog excerpt above. Since the Errorlog doesn’t report the MDMP (dump file) number, you will have to track it down using the timestamp when the issue was reported or using the SQL Dumper log in case you are performing post-mortem analysis. The SQL Dumper log name is “SQLDUMPER_ERRORLOG.log” and it is located by default in the SQL Server LOG folder. You will find a message similar to the one shown below which will point you to the correct .mdmp file.

(2A30:2760) 07/24/11 22:17:52, ACTION, SQLDUMPER_UNKNOWN_APP.EXE, MiniDump completed: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\LOG\SQLDump0025.mdmp

Now I need to use the Windows Debugging Tools to load the .mdmp file. Note that the non-yielding message in the SQL Server Errorlog reported “External dump process returned no errors.” This means that the dump generation routine didn’t encounter any errors. The most common error that occurs while generating dumps (due to a resource crunch, memory or CPU) is a timeout, which means that the dump generation timed out and it is a delayed dump. In that case the information contained in the dump file may or may not be useful to perform a complete post-mortem analysis. This is when Microsoft CSS would ask you to collect additional data for the next occurrence if the current data available is not sufficient for a complete post-mortem analysis.

So let’s move on to the good part now… i.e. analysis of the dump. Keep in mind that this is a mini-dump, which means that there is a limited amount of information stored in the dump, and we are using public symbols to analyze it. Even with the private symbols that Microsoft CSS Engineers have access to, there might not be a whole lot they can decipher from a mini-dump, and they might request additional data!

When you load the dump using WinDBG, you will see the following information:

Loading Dump File [C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\SQLDump0025.mdmp]
User Mini Dump File: Only registers, stack and portions of memory are available

Comment: 'Stack Trace'
Comment: 'Non-yielding Scheduler'


Debug session time: Sun Jul 24 22:17:52.000 2011 (UTC + 5:30)

The above tells you that this is a mini-dump for a non-yielding scheduler condition, the location from which you loaded the dump, and the time when the dump was generated. Next, I use the .sympath command below to set my symbol path and direct the symbols downloaded from the Microsoft symbol server to a local symbol file cache on my machine.

What is a minidump? 
A memory dump which contains the memory for the call stack of all threads, the CPU registers and information about which modules are loaded.

.sympath srv*D:\PublicSymbols*http://msdl.microsoft.com/download/symbols

Then I issue a reload command to load the symbols for sqlservr.exe. This can also be done using CTRL+L and providing the complete string above (without .sympath), checking the Reload checkbox and clicking OK. The only difference is that public symbols for all loaded modules in the dump will then be downloaded from the Microsoft Symbol Server, where available.

.reload /f sqlservr.exe

The next thing is to verify that the symbols were loaded correctly using the lmvm sqlservr command. If the symbols were loaded correctly, you should see the following output; note the pdb symbols entry pointing at the local symbol cache.

0:019> lmvm sqlservr
start             end                 module name
00000000`00c60000 00000000`04820000   sqlservr   (pdb symbols)          D:\publicsymbols\sqlservr.pdb\6A3527657A3F4090AB2C96F7008A55C52\sqlservr.pdb
Loaded symbol image file: sqlservr.exe
Mapped memory image file: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
Image path: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
Product version:  10.50.1790.0

If symbols were not loaded, then you will see an output as shown below.

0:019> lmvm sqlservr
start             end                 module name
00000000`00c60000 00000000`04820000   sqlservr   (export symbols)       sqlservr.exe
Loaded symbol image file: sqlservr.exe
Mapped memory image file: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
Image path: C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn\sqlservr.exe

Next I shall use the Thread ID noted above (0x990) in the Errorlog excerpt to switch to the thread and get the callstack for the thread which was reported as non-yielding.

0:019> ~~[0x990]s
ntdll!NtWaitForSingleObject+0xa:
00000000`76e06bda c3              ret
0:019> kL100

ntdll!NtWaitForSingleObject
kernel32!WaitForSingleObjectEx
sqlservr!SOS_Scheduler::SwitchContext 
sqlservr!SOS_Scheduler::SuspendNonPreemptive
sqlservr!SOS_Scheduler::Suspend


msvcr80!endthreadex
msvcr80!endthreadex
kernel32!BaseThreadInitThunk
ntdll!RtlUserThreadStart

Looking at the callstack above, I can say that this is not the cause of the non-yielding condition (99.99999% true in most cases)!! The reason I can say this is that I find SwitchContext on the thread stack, which means the thread has yielded. The sequence of events is as follows: SQL Server detected a non-yielding worker thread, and by the time SQL Dumper was initiated to capture a memory dump of the SQL Server process, the thread had yielded and processing had moved on to the next work request in the queue.

However, if the top of your dump’s callstack showed not the functions noted above but something like the two functions below, then you definitely have the right callstack and the issue is due to stuck-stalled I/O.

ntdll!NtWriteFileGather
kernel32!WriteFileGather

The non-yielding scheduler message from the dump I investigated which had the above callstack had the following stuck-stalled IO warnings in the SQL Server Errorlog just prior to the dump getting generated:

SQL Server has encountered 218 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\tempdb.mdf] in database [tempdb] (2).

So now I definitely know that the non-yielding condition was due to a faulty disk subsystem, and the SQL Server instance at this point is just a victim of the issue!

If you see the following message in the SQL Server Errorlog along with the dump generation message, then it is definitely a delayed dump, and valuable information pertinent to your analysis may be missing.

Timeout while waiting for external dump

Most of the common non-yielding conditions are documented in the whitepaper that I referred to above. If you are unable to get a callstack for the non-yielding issue and the symptoms or sequence of events leading up to the issue doesn’t match any KB Article, then it’s time to call CSS to get the memory dump analyzed with the help of the private symbols that the CSS engineers have access to.

Happy debugging!!

Reference:
KB 315263 – How to read the small memory dump files that Windows creates for debugging

Original source: http://troubleshootingsql.com/2011/08/13/debugging-a-non-yielding-scheduler-issue/


Finding Filtered Rows In The UltraWinGrid.RowsCollection


October 3, 2013 | Posted in Infragistics

Need to find out how many rows remain after applying a column filter?

UltraWinGrid.RowsCollection.FilteredInRowCount

Simply query the above property to find out how many rows remain as a result of the filter.

Hook into the AfterRowFilterChanged event to perform any required action when the filter has changed, as in the sketch below.
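
A minimal VB.NET sketch, assuming a grid named grid1 and a status label named statusLabel (both names are illustrative):

Private Sub grid1_AfterRowFilterChanged(ByVal sender As Object, _
        ByVal e As Infragistics.Win.UltraWinGrid.AfterRowFilterChangedEventArgs) _
        Handles grid1.AfterRowFilterChanged

    ' FilteredInRowCount reports how many rows remain in view
    ' once the column filter has been applied
    Dim visibleRows As Integer = grid1.Rows.FilteredInRowCount

    statusLabel.Text = visibleRows.ToString() & " row(s) match the current filter"

End Sub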


Parsing/Displaying XML with XmlReader and XmlWriter


October 3, 2013 | Posted in .Net

Imports System.Drawing
Imports System.IO
Imports System.Text
Imports System.Windows.Forms
Imports System.Xml

 

Public NotInheritable Class XMLViewer

    ' Displays the supplied XML string, re-indented, in a modal viewer form.
    Public Shared Sub ViewXML(ByVal parentForm As Form, ByVal xml As String)

        If ((parentForm IsNot Nothing) AndAlso (Not String.IsNullOrEmpty(xml))) Then

            Dim xmlDisplay As New System.Windows.Forms.Form
            Dim brokerXML As New RichTextBox()
            Dim output As StringBuilder = New StringBuilder()
            Dim ws As XmlWriterSettings = New XmlWriterSettings()

            ' size the viewer relative to its parent
            Dim formSize As New Size(CInt(parentForm.Width / 2), (parentForm.Height - CInt(parentForm.Height / 6)))

            brokerXML.Multiline = True
            brokerXML.Dock = DockStyle.Fill

            xmlDisplay.Size = formSize
            xmlDisplay.MinimumSize = formSize
            xmlDisplay.Text = "XML Viewer"
            xmlDisplay.StartPosition = FormStartPosition.CenterParent
            xmlDisplay.Controls.Add(brokerXML)

            ' indent the rewritten XML for readability
            ws.Indent = True

            Try
                Using reader As XmlReader = XmlReader.Create(New StringReader(xml))

                    Using writer As XmlWriter = XmlWriter.Create(output, ws)

                        ' walk the source XML node by node, echoing each node
                        ' to the writer, which handles the formatting
                        While reader.Read()

                            Select Case reader.NodeType
                                Case XmlNodeType.Element
                                    writer.WriteStartElement(reader.Name)
                                Case XmlNodeType.Text
                                    writer.WriteString(reader.Value)
                                Case XmlNodeType.XmlDeclaration, XmlNodeType.ProcessingInstruction
                                    ' VB's Select Case does not fall through, so these
                                    ' two node types must share a single Case
                                    writer.WriteProcessingInstruction(reader.Name, reader.Value)
                                Case XmlNodeType.Comment
                                    writer.WriteComment(reader.Value)
                                Case XmlNodeType.EndElement
                                    writer.WriteFullEndElement()
                            End Select

                        End While

                    End Using

                End Using

                brokerXML.Text = output.ToString()

                xmlDisplay.ShowDialog()

            Catch
                Throw
            Finally
                If (brokerXML IsNot Nothing) Then
                    brokerXML.Dispose()
                End If

                If (xmlDisplay IsNot Nothing) Then
                    xmlDisplay.Dispose()
                End If
            End Try

        End If

    End Sub

End Class


Accessing 32-bit DLLs from 64-bit code


October 3, 2013 | Posted in .Net

Migrating your 32-bit Windows application to a 64-bit machine can be problematic if you have 32-bit DLLs that you cannot re-write. Mike Becker shows you how you can access 32-bit DLLs from 64-bit code using built-in IPC mechanisms.

Originally published on DNJ Online, June 2007.

Microsoft’s 64-bit technology first appeared with Windows Server 2003 for Itanium 2 (also known as IA64 Architecture) and for eXtended technology CPUs (also known as x64 Architecture). It offers many advantages but also raises new issues for the software developer. For example, you may still need to access existing 32-bit DLLs from a 64-bit process.

A key advantage of 64-bit technology is its ability to address up to 8Tb of memory, against a maximum of 2Gb for 32-bit processes. As a result, 64-bit technology allows most data processing to take place in memory, without any need for temporary disk storage. This can considerably increase performance and open up new data processing scenarios. There are therefore good arguments for migrating current 32-bit software products to a 64-bit platform.

Many C or C++ applications are easy to migrate to a 64-bit platform, particularly if they are written in a monolithic fashion. Sometimes they just need to be rebuilt with an x64/IA64 compiler to run as native 64-bit applications. However, distributed or module-based software can cause more problems.

 

The conflict: 64-bit versus 32-bit

A major migration issue concerns 32-bit software components which cannot be migrated, perhaps because the source code is lost or one of the dependencies cannot be migrated.

Your 32-bit software is still supported on a 64-bit platform, as 32-bit processes can be executed inside the dedicated ‘Windows on Windows’ (WOW64) subsystem which is part of all 64-bit Windows operating systems. However, a 64-bit process cannot load a 32-bit module into its process space, and a 32-bit process cannot load a 64-bit module into its process space. The only way that communication can happen between 32-bit and 64-bit modules is through interprocess communication (IPC). In other words, 32-bit and 64-bit processes can exchange data using IPC techniques such as out-of-process COM, sockets, Windows messages or memory-mapped files.

Consider a 32-bit software product containing the main module WeatherReport, which calls into the DLL WeatherStationControl. As long as both the main module and the DLL are 32-bit processes, the product can run on both 32-bit and 64-bit platforms (inside WOW64). If both the main module and the DLL are migrated to the 64-bit platform, then they can both run in a native 64-bit process. However, if only the main module is migrated to 64-bit, it will not be able to load the 32-bit DLL.

The best way to migrate such a product to a 64-bit platform is to migrate both the main module and the dependency DLL, but if the dependency DLL cannot be migrated then it cannot be loaded into the 64-bit process and the application won’t work.

The solution: a surrogate process

This issue can be solved by loading the dependency DLL into a separate 32-bit process space. The main module, running as a 64-bit process, can then access the dependency DLL across the process boundary using IPC (see MSDN reference).

A 64-bit process can access a 32-bit DLL across a process boundary if the 32-bit DLL is loaded into a separate 32-bit surrogate process space, and the application makes use of the built-in IPC mechanisms that support data exchange between 32-bit and 64-bit processes.

This solution requires additional work as the 32-bit surrogate process that loads the 32-bit DLL and exposes its API must be created. Also, some changes will be necessary on the 64-bit side as the consumer must use one of the IPC techniques instead of directly accessing the 32-bit DLL. It is worth noting that, in extreme cases, this additional work could be comparable to the work involved in developing a 64-bit version of the 32-bit DLL from scratch.

One possible way of reducing these costs is to implement a 64-bit ‘wrapper’ DLL that exposes the same functions, parameters, types and so forth as the original 32-bit DLL. This wrapper DLL can then make IPC-based calls to the original 32-bit DLL, which has been loaded into a surrogate process.

A 64-bit wrapper DLL (WeatherStationControl64.DLL) exports the same interface as the original 32-bit DLL (WeatherStationControl.DLL), so providing the same services to the main process (WeatherReport) without you needing to make any changes to the code of either the main process or the 32-bit DLL. This wrapper DLL delegates calls to the 32-bit DLL, which is running in a surrogate process, using IPC.

The main costs of this solution arise from implementing the surrogate process, loading the 32-bit DLL and implementing the 64-bit wrapper DLL. The actual cost depends on the IPC technique used to exchange data between the 64-bit and 32-bit processes.

COM as an IPC mechanism

One of the most popular IPC techniques is DCOM (Distributed COM). Originally designed for distributed systems, DCOM is still supported on 64-bit Windows platforms, so both 32-bit and 64-bit COM modules can be built. The only limitation is that 64-bit and 32-bit modules cannot reside in the same process space, so they have to interoperate across process boundaries. This is done using ‘out-of-process’ (OOP) COM components, in the following way:

  1. Create a 32-bit component implementing a COM object which loads and calls into the 32-bit DLL, and exposes the 32-bit DLL interface as a COM interface.
  2. Configure this COM component for OOP by either creating a standard COM+ OOP application (using dllhost as the surrogate process), or by implementing the COM component as a dedicated COM server process using, for example, an ATL COM server as hosting process or a Win32 service as a dedicated COM server.
  3. Create a 64-bit wrapper DLL which implements the same interface as the original 32-bit DLL, imports the COM interface of the COM object created above, translates calls to the exposed interface into calls to the COM object interface, transfers the call parameters, receives return values and delegates them to the callers.

The 32-bit DLL (WeatherStationControl.DLL) is used by a COM object (WeatherStationWrapper) which exposes the interface of the 32-bit DLL as a COM interface. The 64-bit wrapper DLL (WeatherStationControl64.DLL) makes calls to this interface which are delegated to the original 32-bit DLL. The main process (WeatherReport) calls the interface exposed by the 64-bit wrapper DLL but is in fact served by the original 32-bit DLL.
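
To make the delegation concrete, here is a rough C# sketch of how a 64-bit caller might reach such an out-of-process component; the ProgID and method name are hypothetical and error handling is omitted:

using System;

class WeatherClient
{
    static void Main()
    {
        // hypothetical ProgID under which the 32-bit OOP COM
        // wrapper component was registered
        Type wrapperType = Type.GetTypeFromProgID("WeatherStation.Wrapper");

        // COM starts the 32-bit surrogate process and marshals
        // every call across the 64/32-bit process boundary
        dynamic station = Activator.CreateInstance(wrapperType);

        // hypothetical method forwarded to the original 32-bit DLL
        Console.WriteLine(station.GetTemperature(1));
    }
}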

This solution should be significantly less expensive than creating a 64-bit version of the 32-bit DLL from scratch. Microsoft’s ATL technology is supported by Visual Studio with wizards and ready-written code fragments which should also help lower migration costs by saving time and reducing the likelihood of errors.

Implications

There are, however, a number of things that you still need to keep in mind:

1. Alignment
The alignment of data in memory is different for 32-bit and 64-bit processes. This means that your more complicated custom data structures may be serialized by a 32-bit process in a way that is different to that expected by a 64-bit process, and vice versa. The Microsoft Windows Platform SDK includes documentation about the differences in memory data alignment between 32-bit and 64-bit processes.

2. Data types
In most instances, 64-bit Windows uses the same data types as the 32-bit version. The differences are mainly in pointers, which are 32 bits long in 32-bit Windows and 64 bits long in 64-bit Windows. The pointer-derived data types such as HANDLE and HWND are also different between 32-bit and 64-bit versions. Windows helps you to keep a single code base for both 32-bit and 64-bit software versions by offering polymorphic data types that have a different length depending on the target platform; for example, INT_PTR declares ‘an integer with the size of a pointer’. Any variable of this type is an integer which is 32 bits long on a 32-bit platform and 64 bits long on a 64-bit platform.
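
In .NET the equivalent polymorphic type is IntPtr; a tiny illustrative C# check confirms which flavour of process you are running in:

class PointerSizeDemo
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process
        System.Console.WriteLine("Pointer size: " + System.IntPtr.Size + " bytes");
    }
}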

3. COM initialize
You can only access a COM object from a Windows application once COM has been successfully initialized. The COM API function CoInitialize() must be called by each thread that is going to access a COM object before any COM interface calls are made, and the complementary call CoUninitialize() must be performed before the thread exits (see MSDN reference). This rule must be strictly respected if the main process’s calls to the original 32-bit DLL are multi-threaded.

4. Security
The OOP COM component instantiates COM objects in a separate process, whether a surrogate process, a COM server or Win32 service. This can mean that calls to the 32-bit DLL may happen in a different security context to the main process, especially if the main process makes intensive use of impersonation. If this is the case you may want to configure dedicated credentials for the OOP component, or implement internal impersonation in the COM object.

5. Performance
The IPC-based solution is almost certain to be slower than making direct calls into the DLL. Data marshaling over process boundaries, automatic data conversion between 32 and 64 bits, WOW64 features and delays while instantiating the COM object will all impact performance. However, there are numerous optimizing techniques that you can use, such as COM pooling, caching inside the wrapper DLL, ‘chunky’ versus ‘chatty’ calls over process boundaries, implementing performance-critical interfaces directly in the 64-bit DLL, and so forth.

6. Redirection
The WOW64 subsystem is in charge of supporting 32-bit modules on 64-bit Windows. To avoid unwanted collisions between 32-bit and 64-bit software, particularly when accessing the file system and registry, WOW64 isolates 32-bit modules using a process called ‘redirection’ (see MSDN reference). For example, for a 64-bit process the call to obtain the system folder pathname returns %WINDOWS%\System32, but for a 32-bit process it returns %WINDOWS%\SysWOW64. The program folder path for a 64-bit process is ‘Program Files’, but for a 32-bit process it is ‘Program Files (x86)’. The registry key HKEY_LOCAL_MACHINE\Software contains 64-bit process settings and data, while the key HKEY_LOCAL_MACHINE\Software\WOW6432Node contains 32-bit process settings and data.

This redirection is activated automatically when software modules refer to popular pre-defined system paths or registry keys.

7. Kernel modules
The solution proposed here is for 32-bit user-level DLLs, and doesn’t work with 32-bit drivers. This is because 32-bit kernel modules cannot be used on a 64-bit platform, with no exceptions or workarounds. If your product includes any kernel-level module, such as a device driver, then the only possible migration route is to re-write the kernel module for 64-bit.

8. Setup
Using a COM OOP component in place of direct calls to a DLL requires changes to your setup procedure, as the COM component must be installed and registered with the system. As discussed under ‘Security’ above, this may involve configuring dedicated credentials for the COM component.


SQL Server ALTER DATABASE SET TRUSTWORTHY ON


October 3, 2013 | Posted in SQL Server

Service Broker not working for you? Altering this setting on your database may save you a lot of time!
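
For example (the database name is a placeholder for your own):

-- mark the database as trusted by the instance; read the
-- guidelines below before doing this in production
ALTER DATABASE YourDatabase SET TRUSTWORTHY ON;
GO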

Guidelines for using the TRUSTWORTHY database setting in SQL Server

Summary

You can use the TRUSTWORTHY database setting to indicate whether the instance of Microsoft SQL Server trusts the database and the contents within the database. By default, this setting is set to OFF. However, you can set it to ON by using the ALTER DATABASE statement. We recommend that you leave this setting set to OFF to mitigate certain threats that may be present when a database is attached to the server and the following conditions are true:

  • The database contains malicious assemblies that have an EXTERNAL_ACCESS or UNSAFE permission setting. For more information, visit the following Microsoft Developer Network (MSDN) website:

    CLR Integration Security

  • The database contains malicious modules that are defined to execute as users that are members of a group that has administrative credentials. For more information, visit the following MSDN website:

    Extending Database Impersonation by Using EXECUTE AS

Note By default, the TRUSTWORTHY setting is set to ON for the MSDB database. Altering this setting from its default value can result in unexpected behavior by SQL Server components that use the MSDB database.

If the TRUSTWORTHY setting is set to ON, and if the owner of the database is a member of a group that has administrative credentials, such as the sysadmin group, the database owner may be able to create and to run unsafe assemblies that can compromise the instance of SQL Server.

More Information

In an Internet service provider (ISP) environment (for example, in a web-hosting service), each customer is permitted to manage their own database and is restricted from accessing system databases and other user databases. For example, the databases of two competing companies could be hosted by the same ISP and exist in the same instance of SQL Server. Dangerous code could be added to a user database when the database was attached to its original instance, and the code would be enabled on the ISP instance when the database was deployed. This situation makes controlling cross-database access crucial.
 
If the same general entity owns and manages each database, it is still not a good practice to establish a “trust relationship” with a database unless an application-specific feature, such as cross-database Service Broker communication, is required. A trust relationship between databases can be established by enabling cross-database ownership chaining or by marking a database as trusted by the instance by using the TRUSTWORTHY property. The is_trustworthy_on column of the sys.databases catalog view indicates whether a database has its TRUSTWORTHY bit set.

Best practices for database ownership and trust include the following:

• Have distinct owners for databases. Not all databases should be owned by the system administrator.
• Limit the number of owners for each database.
• Confer trust selectively.
• Leave the Cross-Database Ownership Chaining setting set to OFF unless multiple databases are deployed as a single unit.
• Migrate usage to selective trust instead of using the TRUSTWORTHY property.

The following code sample can be used to obtain a list of databases that have the TRUSTWORTHY bit ON and whose database owner belongs to the sysadmin server role.

SELECT SUSER_SNAME(owner_sid) AS DBOWNER, d.name AS DATABASENAME
FROM sys.server_principals r
INNER JOIN sys.server_role_members m ON r.principal_id = m.role_principal_id
INNER JOIN sys.server_principals p ON p.principal_id = m.member_principal_id
INNER JOIN sys.databases d ON SUSER_SNAME(d.owner_sid) = p.name
WHERE is_trustworthy_on = 1 AND d.name NOT IN ('MSDB') AND r.type = 'R' AND r.name = N'sysadmin'

You can run the following query to determine the TRUSTWORTHY property of the MSDB database:

SELECT name, TrustWorthySetting =
    CASE is_trustworthy_on
        WHEN 1 THEN 'TrustWorthy setting is ON for MSDB'
        ELSE 'TrustWorthy setting is OFF for MSDB'
    END
FROM sys.databases WHERE database_id = 4

If this query shows that the TRUSTWORTHY property is set to OFF, you can run the following query to set the TRUSTWORTHY property to its default value.

ALTER DATABASE msdb SET TRUSTWORTHY ON
GO

The following table provides more information on the products or tools that automatically check for this condition on your instance of SQL Server and the versions of the SQL Server product that the rule is evaluated against.

Rule software: System Center Advisor
Rule title: SQL Server msdb system database has trustworthy option set to OFF
Rule description: System Center Advisor checks whether the database option Trustworthy is set to OFF for the msdb system database for this instance of SQL Server. If it is set to OFF, the advisor generates an alert. Review the information provided in the “Information Collected” section of the advisor alert and follow the resolutions that are provided in this article.
Product versions evaluated against: SQL Server 2008, SQL Server 2008 R2

Rule software: SQL Server 2008 R2 Best Practice Analyzer (SQL Server 2008 R2 BPA)
Rule title: Trustworthy Bit
Rule description: The SQL Server 2008 R2 BPA provides a rule to detect when a database has the TRUSTWORTHY property set to ON and whether the database owner is a member of the sysadmin server group. If you run the BPA tool and receive a warning titled “Engine – Trustworthy Bit,” we recommend that you run the query listed in the previous section to identify the database that has the TRUSTWORTHY bit set, and that you consider changing the setting to OFF.
Product versions evaluated against: SQL Server 2008, SQL Server 2008 R2

Rule software: SQL Server 2012 Best Practice Analyzer (SQL Server 2012 BPA)
Rule title: Trustworthy Bit
Rule description: The SQL Server 2012 BPA provides a rule to detect when a database has the TRUSTWORTHY property set to ON and whether the database owner is a member of the sysadmin server group. If you run the BPA tool and receive a warning titled “Engine – Trustworthy Bit,” we recommend that you run the query listed in the previous section to identify the database that has the TRUSTWORTHY bit set, and that you consider changing the setting to OFF.
Product versions evaluated against: SQL Server 2012


Monitoring Website Downtime With Google Docs


October 3, 2013 | Posted in Web

Do you have a website? Would you like to receive alerts on your email address as soon as your site goes down or becomes inaccessible to users?

You can either subscribe to one of these website monitoring services to track your site’s uptime (and downtime) or you can build your own site monitor with the help of Google Docs. The one big advantage with Google Docs is that it can be configured to check your website status every minute and thus you’ll instantly know if your site is down. And it’s free.


Website Monitoring with Google Docs

It will take a minute to configure Google Docs as an uptime monitor for your website.

  1. Assuming that you are signed-in to your Google Account, click here to make a copy of the Google Docs sheet.
  2. Put your website’s URL in cell E3 and your email address in cell E5. This is the address where you wish to receive notifications for downtime and uptime.
  3. Go to Tools -> Script Editor -> Resources -> Current Script’s Triggers and set a Time-Driven trigger for every minute (or 5 minutes).
  4. Save the Trigger and Google Docs will show a big red warning asking for authorization. Just accept it and Save the trigger again.

That’s it. The Google Docs sheet will monitor your website in the background at all times and will send an email alert whenever it has trouble accessing the website. If the issue is resolved, you’ll get another notification saying “all’s well.”

Also, the downtime details (including the server responses) are logged in the Google Docs sheet itself so you can later analyze the downtime in greater detail. You can browse the source code to understand how monitoring works behind the scenes.

One more thing. If you know a bit of programming, create a new Twitter app specifically for tracking your site’s uptime and Google Docs can then alert you through tweets and SMS text messages if your site is having problems.

 


Original source: http://www.labnol.org/internet/website-uptime-monitor/21060/


How To Format The UltraFormattedTextEditor Value Using FormattedTextUIEditorForm


October 3, 2013 | Posted in Infragistics

Using and displaying correctly formatted text in the Infragistics UltraFormattedTextEditor control is somewhat cumbersome, especially when you are dealing with formatted text that somebody happened to copy and paste from another application such as Microsoft Word. In my case, this was being done in an administration screen, which stored a value that the user later wanted displayed in a nicely formatted way; I chose the UltraFormattedTextEditor control, which seemed the logical path to go down for displaying the required value.

I soon realised that I needed to limit how the formatted text was stored in SQL Server, so I opted to use the FormattedTextUIEditorForm, which is the form you see in design mode when looking at the UltraFormattedTextEditor control. Unfortunately this is not exposed in the out-of-the-box Infragistics products (which is a great shame); you can however get to it if you have their source code. If you do, you can find it in the SupportDialogs folder, and from there the FormattedTextEditor folder; the class you are interested in is FormattedTextUIEditorForm.cs.

It was a bit of a pain to import Infragistics2.Win.SupportDialogs.csproj into my solution and get it to compile, but once this was working I could call the form directly like so (which is exactly what I needed to give the user so they could format their text the way it needed to be stored from the administration screen):

 

Try
    Dim result As DialogResult
    Dim value As String = String.Empty

    Using formattedTextDialog As New FormattedTextUIEditorForm()

        ' seed the dialog with any existing description text
        If (Me.DescriptionTextbox.Text.Trim().Length > 0) Then
            formattedTextDialog.Value = Me.DescriptionTextbox.Text
        End If

        result = formattedTextDialog.ShowDialog()

        ' Value holds the raw (markup) text representation
        value = CStr(formattedTextDialog.Value)

    End Using

    If (result = DialogResult.OK AndAlso value.Length > 0) Then
        Me.DescriptionTextbox.Text = value
    End If

Catch ex As Exception
    ' handle exception
End Try

 

I needed to capture the value string (which is actually the raw text value) and store this in SQL Server; it would later get served up to the UltraFormattedTextEditor control. I did have an issue finding all of the required icons; I seemed to find them all except a few, including left align and right align. If you’ve used this in design view, you’ll know that you get the value tab, which shows the text as you want it displayed, and the raw text tab, which is the formatted text including the required html tags you need to display in the UltraFormattedTextEditor control. I’ve included two screenshots showing my implementation; if it helps somebody else trying to overcome the same hurdle as me, then it was worth putting this article together.

Infragistics have since suggested that another solution would be to use the “Ribbon with Formatted Text Editor” example from their installed samples. I’ve checked it out and it does do what I needed, so it may be a simpler approach to achieving your goal; however, I still like the fact that the FormattedTextUIEditorForm shows the raw text value on the fly.


Win32 API user32.dll – SendMessage() Function


October 3, 2013 | Posted in .Net

“user32.dll implements the Windows USER component that creates and manipulates the standard elements of the Windows user interface, such as the desktop, windows, and menus. It thus enables programs to implement a graphical user interface that matches the Windows look and feel. Programs call functions from Windows USER to perform operations such as creating and managing windows, receiving window messages (which are mostly user input such as mouse and keyboard events, but also notifications from the operating system), displaying text in a window, and displaying message boxes.

Many of the functions in User32.dll call upon GDI functions exported by Gdi32.dll to do the actual rendering of the various elements of the user interface. Some types of programs will also call GDI functions directly to perform lower-level drawing operations within a window previously created via User32 functions.”

I needed to stop the MDI parent from repainting whilst displaying child forms in an application. The following code illustrates how this can be achieved:

 

// requires: using System.Runtime.InteropServices;
[DllImport("user32.dll")]
public static extern int SendMessage(IntPtr hWnd, Int32 wMsg, bool wParam, Int32 lParam);

// WM_SETREDRAW (0x000B) toggles a window's redraw flag
private const int WM_SETREDRAW = 11;

private void DoSomething()
{
    // suspend drawing
    SendMessage(this.Handle, WM_SETREDRAW, false, 0);

    // do something here

    // resume drawing
    SendMessage(this.Handle, WM_SETREDRAW, true, 0);

    // repaint client area
    this.Refresh();
}

For more information on SendMessage(), see http://msdn.microsoft.com/en-us/library/windows/desktop/ms644950%28v=vs.85%29.aspx


The perils of GC.Collect (or when to use GC.Collect)


October 3, 2013 | Posted in .Net

An interesting article (which links to other interesting articles) regarding garbage collection, and when to force the collector to free memory if you’re seeing OutOfMemoryExceptions:

http://blogs.msdn.com/b/scottholden/archive/2004/12/28/339733.aspx
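
As a rule of thumb from those articles, a forced collection is only worth considering immediately after an unusually large object graph has been released; a minimal illustrative sketch (the cache field and its size are made up):

using System;

class CacheHolder
{
    // stand-in for a very large object graph
    private static byte[] largeCache = new byte[100000000];

    static void Main()
    {
        // drop the only reference to the large object graph
        largeCache = null;

        // reclaim it now rather than waiting for the next natural
        // collection; overusing forced collections hurts performance
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect(); // second pass collects objects released by finalizers
    }
}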


Where’s the hash key on a Mac?


October 3, 2013 | Posted in Mac

You’ll have noticed that the # character is missing on the MacBook Pro keyboard (mine is a mid-2012 model).

It’s not on the keyboard mapping, which is a bit weird and frustrating, but you can simply press ALT+3 to get the required character.

If like me you are running Windows in Parallels Desktop 8 for Mac, the key combination is CTRL+ALT+3.

Voila.


Exchanging Data Between Different Parallels Desktop For Mac Virtual Machines


October 3, 2013 | Posted in Mac

Parallels Explorer’s multi-window mode makes it easy to manage the contents of different virtual machines. You can copy data from one virtual machine to another by simply dragging it. Since Parallels Explorer supports more than just Parallels file formats, you can also exchange data between Parallels virtual machines and third-party virtual machines.

Note: If you copy files to a Windows virtual machine using Parallels Explorer or Parallels Mounter, you need to log in to Windows as an administrator to be able to open them inside the virtual machine.

To copy a file from a third-party virtual machine to a Parallels virtual machine

  1. Open Parallels Explorer.
  2. Click the Virtual Machines icon in the sidebar.
  3. In the working area, choose the third-party virtual machine that stores the file. The list of volumes used by the virtual machine appears.
  4. Choose the volume where the file is stored.
  5. Choose Open New Window from the File menu. One more Parallels Explorer window opens.
  6. In the working area, choose the Parallels virtual machine that will store the file. The list of volumes used by the virtual machine appears.
  7. Choose the volume where the copied file will be placed.
  8. Open the destination folder.
  9. To copy the file, drag it from the third-party virtual machine window to the Parallels virtual machine window.

To copy a file from one Parallels virtual machine to another Parallels virtual machine

  1. Open Parallels Explorer.
  2. Click the Virtual Machines icon in the sidebar.
  3. In the working area, choose the virtual machine (for example, Windows XP) that stores the file. The list of volumes used by the virtual machine appears.
  4. Choose the volume where the file is stored.
  5. Choose Open New Window from the File menu.
  6. In a new Parallels Explorer window, click the Virtual Machines icon in the sidebar.
  7. Choose the virtual machine where the file will be placed (for example, Windows 98) in the working area. The list of volumes used by the virtual machine appears.
  8. Choose the volume where the copied file will be placed.
  9. Open the destination folder.
  10. To copy the file, drag it from the Windows XP virtual machine window to the Windows 98 virtual machine window.

Read More →

Higgs boson-like particle discovery claimed at LHC


October 3, 2013 | Posted in Science


Cern scientists reporting from the Large Hadron Collider (LHC) have claimed the discovery of a new particle consistent with the Higgs boson.

The particle has been the subject of a 45-year hunt to explain how matter attains its mass.

Both of the Higgs boson-hunting experiments at the LHC see a level of certainty in their data worthy of a “discovery”.

More work will be needed to be certain that what they see is a Higgs, however.


The results announced at Cern (European Organization for Nuclear Research), home of the LHC in Geneva, were met with loud applause and cheering.

Prof Peter Higgs, after whom the particle is named, wiped a tear from his eye as the teams finished their presentations in the Cern auditorium.

“I would like to add my congratulations to everyone involved in this achievement,” he added later.

“It’s really an incredible thing that it’s happened in my lifetime.”

The CMS team claimed they had seen a “bump” in their data corresponding to a particle weighing in at 125.3 gigaelectronvolts (GeV) – about 133 times heavier than the protons that lie at the heart of every atom.


They claimed that by combining two data sets, they had attained a confidence level just at the “five-sigma” point – about a one-in-3.5 million chance that the signal they see would appear if there were no Higgs particle.

However, a full combination of the CMS data brings that number just back to 4.9 sigma – a one-in-two million chance.

Prof Joe Incandela, spokesman for the CMS, was unequivocal: “The results are preliminary but the five-sigma signal at around 125 GeV we’re seeing is dramatic. This is indeed a new particle,” he told the Geneva meeting.

Atlas results were even more promising, at a slightly higher mass: “We observe in our data clear signs of a new particle, at the level of five sigma, in the mass region around 126 GeV,” said Dr Fabiola Gianotti, spokeswoman for the Atlas experiment at the LHC.

Peter Higgs joined three of the six theoreticians who first predicted the Higgs at the conference

Prof Rolf Heuer, director-general of Cern, commented: “As a layman I would now say I think we have it.”

“We have a discovery – we have observed a new particle consistent with a Higgs boson. But which one? That remains open.

“It is a historic milestone but it is only the beginning.”

Commenting on the emotions of the scientists involved in the discovery, Prof Incandela said: “It didn’t really hit me emotionally until today because we have to be so focussed… but I’m super-proud.”

Dr Gianotti echoed his thoughts, adding: “The last few days have been extremely intense, full of work, lots of emotions.”

A confirmation that this is the Higgs boson would be one of the biggest scientific discoveries of the century; the hunt for the Higgs has been compared by some physicists to the Apollo programme that reached the Moon in the 1960s.

Statistics of a ‘discovery’

  • Particle physics has an accepted definition for a discovery: a “five-sigma” (or five standard-deviation) level of certainty
  • The number of sigmas measures how unlikely it is to get a certain experimental result as a matter of chance rather than due to a real effect
  • Similarly, tossing a coin and getting a number of heads in a row may just be chance, rather than a sign of a “loaded” coin
  • A “three-sigma” level represents about the same likelihood as tossing eight heads in a row
  • Five sigma, on the other hand, would correspond to tossing more than 20 in a row
  • Independent confirmation by other experiments turns five-sigma findings into accepted discoveries

Scientists would then have to assess whether the particle they see behaves like the version of the Higgs particle predicted by the Standard Model, the current best theory to explain how the Universe works. However, it might also be something more exotic.

All the matter we can see appears to comprise just 4% of the Universe, the rest being made up by mysterious dark matter and dark energy.

A more exotic version of the Higgs could be a bridge to understanding the 96% of the Universe that remains obscure.

Scientists will have to look at how the Higgs decays – or transforms – into other, more stable particles after being produced in collisions at the LHC.

Dr Pippa Wells, a member of the Atlas experiment, said that several of the decay paths already showed deviations from what one would expect of the Standard Model Higgs.

For example, a decay path where the Higgs transforms into two photon particles was “a bit on the high side”, she explained.

These could get back into line as more statistics are added, but on the other hand, they may not.

“We’re reaching into the fabric of the Universe at a level we’ve never done before,” said Prof Incandela.

“We’re on the frontier now, on the edge of a new exploration. This could be the only part of the story that’s left, or we could open a whole new realm of discovery.”

 

The Standard Model and the Higgs boson


• The Standard Model is the simplest set of ingredients – elementary particles – needed to make up the world we see in the heavens and in the laboratory

• Quarks combine together to make, for example, the proton and neutron – which make up the nuclei of atoms today – though more exotic combinations were around in the Universe’s early days

• Leptons come in charged and uncharged versions; electrons – the most familiar charged lepton – together with quarks make up all the matter we can see; the uncharged leptons are neutrinos, which rarely interact with matter

• The “force carriers” are particles whose movements are observed as familiar forces such as those behind electricity and light (electromagnetism) and radioactive decay (the weak nuclear force)

• The Higgs boson came about because although the Standard Model holds together neatly, nothing requires the particles to have mass; for a fuller theory, the Higgs – or something else – must fill in that gap

 

By Paul Rincon Science editor, BBC News website, Geneva
