How to set up more frequent merge replications in SQL Server 2000


The quick answer: Set up continuous merge replication, then set its polling interval to the number of seconds you’d like.

The slightly-more-detailed answer:

  1. After you set up merge replication, find your agent in the replication monitor under “Merge Agents”.
  2. Right-click on your agent and choose Agent properties.
  3. Click on the Steps tab.
  4. Find the step named Run agent. Double-click on it to edit its properties.
  5. Edit the command and append -PollingInterval number-of-seconds (for example, -PollingInterval 1). To cut down on logging overhead, you can also append -HistoryVerboseLevel 0 and -OutputVerboseLevel 0; a sample command follows this list.
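
To make that concrete, here's roughly what an edited Run agent command might look like. The server, database, and publication names are placeholders; keep whatever your command already contains and just append the new switches. If you set up continuous replication, -Continuous should already be there:

```
-Publisher [SERVER1] -PublisherDB [MyDB] -Publication [MyPublication]
-Subscriber [SERVER2] -SubscriberDB [MyDB] -Distributor [SERVER1]
-Continuous -PollingInterval 1 -HistoryVerboseLevel 0 -OutputVerboseLevel 0
```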

Step by step:

Set up your tables for replication:

Use Enterprise Manager to create the tables you need on both servers, if they don’t already exist.
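
If it helps to picture it, here's a minimal sketch of a replication-ready table (the table and column names are made up). Merge replication needs a uniqueidentifier ROWGUIDCOL column on each published table; the snapshot agent adds one automatically if it's missing, but creating it yourself keeps the schema explicit:

```
-- Hypothetical example table; names are placeholders.
CREATE TABLE dbo.DashboardStats (
    StatName  varchar(50) NOT NULL PRIMARY KEY,
    StatValue int NOT NULL,
    UpdatedAt datetime NOT NULL DEFAULT GETDATE(),
    -- Merge replication tracks rows by this column.
    rowguid   uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT NEWID()
)
```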

Configure publishers and subscribers:

  1. Right-click on Replication and choose Configure Publishing, Subscribers, and Distribution.
  2. Click on Publishers. Select your server.
  3. Click on Databases. Check the Merge checkbox for your database (see the T-SQL equivalent after this list).
  4. Click on Subscribers. Select the other server.
  5. Click on OK.
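
If you'd rather script it, this is roughly what checking the Merge checkbox in step 3 does behind the scenes; a hedged sketch, assuming the wizard has already configured the distributor (the database name is a placeholder):

```
-- Run on the publisher: enable merge publishing for the database.
EXEC sp_replicationdboption
    @dbname  = N'MyDB',
    @optname = N'merge publish',
    @value   = N'true'
```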

Create the publication:

  1. Right-click on Replication – Publications. Choose New Publication.
  2. Follow the wizard to create a publication including the tables you want to replicate.
  3. Right-click on the publication and choose Push New Subscription.
  4. Follow the wizard to create a subscription for the second server. (A T-SQL sketch of these wizard steps follows.)
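
For the script-minded, a rough T-SQL sketch of what the two wizards set up. All names are placeholders, and details like snapshot scheduling and security settings are omitted, so treat it as a map rather than a recipe:

```
-- Create the merge publication (names are placeholders).
EXEC sp_addmergepublication
    @publication = N'MyPublication',
    @description = N'Merge publication of the dashboard tables'

-- (You'd also create the Snapshot Agent for the initial sync,
-- e.g. with sp_addpublication_snapshot.)

-- Add each table you want to replicate as an article.
EXEC sp_addmergearticle
    @publication   = N'MyPublication',
    @article       = N'DashboardStats',
    @source_owner  = N'dbo',
    @source_object = N'DashboardStats'

-- Push a subscription to the second server.
EXEC sp_addmergesubscription
    @publication       = N'MyPublication',
    @subscriber        = N'SERVER2',
    @subscriber_db     = N'MyDB',
    @subscription_type = N'push'
```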

Change the polling interval:

  1. Click on Replication Monitor – Agents – Merge Agents to view the list of replication agents.
  2. Right-click on your agent and choose Agent properties.
  3. Click on the Steps tab.
  4. Find the step named Run agent. Double-click on it to edit its properties.
  5. Edit the command and append -PollingInterval number-of-seconds (for example, -PollingInterval 1). As in the quick answer, appending -HistoryVerboseLevel 0 and -OutputVerboseLevel 0 minimizes logging and helps performance; a scripted alternative to the GUI edit follows this list.
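
If you have several agents to change, editing each job step by hand gets tedious. Here's a hypothetical T-SQL alternative using msdb's sp_update_jobstep; the job name and command below are placeholders, so copy your real command from the Steps tab and append the switches rather than retyping it:

```
-- Hypothetical GUI-free edit of the merge agent's job step.
DECLARE @cmd nvarchar(3200)
SET @cmd = N'-Publisher [SERVER1] -PublisherDB [MyDB] -Publication [MyPublication] '
         + N'-Subscriber [SERVER2] -SubscriberDB [MyDB] -Distributor [SERVER1] '
         + N'-Continuous -PollingInterval 1 -HistoryVerboseLevel 0 -OutputVerboseLevel 0'

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'SERVER1-MyDB-MyPublication-SERVER2-1', -- your merge agent job
    @step_id  = 2,                                       -- usually the "Run agent." step
    @command  = @cmd
```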

The backstory:

What’s an open source web development geek doing with Microsoft SQL Server 2000? It’s a long story involving PHP, AJAX, a critical situation at work (the previous developer left IBM without a good transition plan), and a contact center. I had fun building the dashboard the client needed, but I sweated my way through setting up SQL Server 2000 replication because I was piecing it together from disparate web pages, and I forgot to account for replication delays. Result: the dashboards on the different servers got out of sync.

I found myself getting stressed out. I felt inexperienced and frustrated, and I hated leaving clients hanging. I felt overwhelmed by all the information available on the Net, but I couldn’t find exactly the kind of information I was looking for. I felt confused by the options and the version differences. So I did what I always do in difficult situations: I wrote my way through it.

Writing helped me organize my questions and thoughts. I needed to figure out what kind of replication would fit our needs. Merge replication made the most sense, because two-phase commits could be problematic in failover situations.

I knew I needed to figure out how much time it took for updates to propagate from one dashboard to another. I measured it with a stopwatch, listening for the audio alerts from my dashboard: updates took about 45 seconds, which was too long.

When I read that continuous updates behaved like scheduling the agent to run every minute (so an expected average wait of 30 seconds per update), I almost gave up. That was still too long, and I didn’t know whether what I wanted was even possible with the version I had.

Fortunately, I came across a brief mention of PollingInterval, which sounded promising. After lots of sifting through search pages, I found enough to work with. I combined thoughts from a forum thread about merge agents, a tip on changing your polling interval, and transactional replication performance tuning tips.

I tested it with a different database until I was confident about the steps to take and the possible results, and I reported the progress at our checkpoint call. Once the clients gave me the go-ahead, it took me ten minutes to make the changes and maybe forty minutes of checking, double-checking, timing, and cleaning up. Results: the new configuration reduced average wait times from 45 seconds down to 4.6s, with an observed maximum of 6.7s and a lucky minimum of 2.0s. Decently fast, considering the JavaScript dashboard checked for updates every 5 seconds. I checked the network usage, too. Latency looked stable, so the server wasn’t getting overloaded.

It felt great to solve that, and on my own, too. We’ll see if the problem stays solved, but it looks good.
