DEV Community
Jeremy Davis

Posted on • Originally published at blog.jermdavis.dev

Ooops… Looks like WFfM broke analytics…

I got a fun support ticket recently, for a client on Sitecore v8.1 whose analytics processing had stopped working. Their queue of raw analytics data awaiting aggregation was slowly growing, and some functionality which relied on the existence of recent analytics data had stopped working. Trying to fix this led me to two issues related to how Web Forms for Marketers data is processed in analytics – and these seemed like issues that other people who do support work might need info on…

First off, how do you know the analytics queue is growing anyway?

Good question! The queue lives in the “Processing Pool” collection in MongoDB’s “Tracking Live” database, so you can spin up your favourite Mongo analysis tool and count the documents in that collection. As a Robo 3T / RoboMongo user, I run the query db.getCollection('ProcessingPool').count() and look at the results:

You can watch this number over time, and see what’s happening to your queue…

While I was doing this I realised being able to get this data in a more automated way would be helpful, so I worked out how to do the same query in PowerShell:

Add-Type -Path 'C:\inetpub\wwwroot\yourWebsiteFolder\Website\bin\MongoDB.Driver.dll'
$client = New-Object -TypeName MongoDB.Driver.MongoClient -ArgumentList "mongodb://yourMongoServer:yourPort"
$server = $client.GetServer()
$database = $server.GetDatabase("tracking_live")
$val = $database.GetCollection('ProcessingPool').count()
Write-Host "----"
Write-Host "At: $(Get-Date -Format 'dd/MM/yyyy HH:mm')"
Write-Host "Count: $val"

Note that Sitecore does batch processing of this data, so you’ll find that the number in this table will tend to climb a bit, until the processing engine kicks in and processes a batch of data. Hence needing to look at it over a few minutes to see what the trend is.
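Because of that batching, a single count tells you very little – what matters is the slope across a run of samples. As a minimal illustrative sketch (not part of the original post), here's the trend logic in Python; the `queue_trend` function and the sample numbers are my own invention, and in practice each sample would come from the count query shown above, taken a minute or so apart:

```python
# Judge whether the ProcessingPool queue is growing or shrinking from a
# series of counts sampled over time (e.g. one sample per minute).
# The sampling itself is assumed to happen elsewhere (via the Mongo query
# above); this function only looks at the resulting list of numbers.

def queue_trend(samples: list[int]) -> str:
    """Return 'growing', 'shrinking' or 'stable' from a least-squares slope."""
    n = len(samples)
    if n < 2:
        return "stable"
    # Least-squares slope of count against sample index.
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope > 1:        # climbing by more than ~1 item per sample
        return "growing"
    if slope < -1:
        return "shrinking"
    return "stable"

print(queue_trend([5000, 5200, 5150, 5400, 5600]))  # a queue that keeps climbing
print(queue_trend([5000, 4800, 4900, 4500, 4200]))  # a queue being worked down
```

The thresholds of ±1 item per sample are arbitrary – the point is just that a noisy, batch-processed queue needs a trend over several samples, not a single snapshot.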

And on the server I was looking at, this queue was pretty huge😉

So what problems was I seeing?

Issue One:

The first big pile of errors I found in the logs looked like this:

4544 13:07:23 ERROR Exception when storing an aggregation result into reporting database. Item will be postponed and retried later.
Exception: System.Data.SqlClient.SqlException
Message: Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Source: .Net SqlClient Data Provider
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds, Boolean describeParameterEncryptionRequest)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.Commit(SqlDataApi api, IReportingStorageItemBatch batch, Func`2 filter)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.<>c__DisplayClass6.<StoreBatch>b__1()
   at Sitecore.Data.DataProviders.NullRetryer.ExecuteNoResult(Action action, Action recover)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.StoreBatch(IReportingStorageItemBatch batch)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.Store(IReportingStorageItemBatch batch)

Luckily for me, Google solved this one quickly, as Dmytro Shevchenko has written a really helpful post covering this issue.

Applying his patch got rid of these log errors, and the processing queue started to go down again…

Issue Two:

…but after a day or so of happy processing, a new error appeared:

6680 10:27:47 ERROR Exception when storing an aggregation result into reporting database. Item will be postponed and retried later.
Exception: System.Data.SqlClient.SqlException
Message: Column, parameter, or variable @p59. : Cannot find data type Fact_FormSummary_Type.
Source: .Net SqlClient Data Provider
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds, Boolean describeParameterEncryptionRequest)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.Commit(SqlDataApi api, IReportingStorageItemBatch batch, Func`2 filter)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.<>c__DisplayClass6.<StoreBatch>b__1()
   at Sitecore.Data.DataProviders.NullRetryer.ExecuteNoResult(Action action, Action recover)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.StoreBatch(IReportingStorageItemBatch batch)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.Store(IReportingStorageItemBatch batch)

Looking at the database, I couldn’t find anything called “Fact_FormSummary_Type” in the sys.objects catalog view, so the error was probably accurate even if it wasn’t very helpful.

Sadly Google didn’t help me here, as the only reference I found was a community forums post about a different message. The fix there talked about a permissions issue with the database, so I checked that anyway – but the site I was working on already appeared to have a connection string user with those permissions.

Having done a bit more digging and made no progress, I tried speaking to Sitecore Support. They quickly pointed out that this is a known bug. Though sadly a poorly documented one – hence this post.

They say the fix is included in WFfM v8.2 Update 6 – so they recommend updating your Sitecore / WFfM instance to at least that version if possible. But if you can’t do that, then speak to support about Bug #120569 to get access to their patch.

Having applied that patch to the instance of Sitecore I was looking at, I stopped getting this error in my log, and my processing queue started going down again.

And hopefully that’s the last of these issues I need to resolve before the server gets its queue emptied…

Top comments (1)

Pieter Brinkman (Sr. Director Technical Marketing at Sitecore):
Thanks for sharing. Great practical tip on how you can watch your analytics queue with Mongo Analytics tool.
