I have just been following a thread about using Reporting Services 2005 to produce a 55,000-row Excel spreadsheet. The initiator of the thread was concerned that the report was locking up and was looking for a fix.
I could suggest migrating to SQL Server 2008, as it has much better memory management and pipelining to ensure large reports run as expected.
A more considered response is that this looks to me like a data extraction task, which would be better suited to Integration Services. The package could then be scheduled overnight, taking load off the reporting server during the working day and making it more responsive to other users.
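To make the distinction concrete: an overnight extract can be as simple as a scheduled script that streams a query result straight to a file, with no report-rendering engine in the way. A minimal sketch, using Python's stdlib sqlite3 as a stand-in for the real SQL Server source (table and file names are hypothetical; in practice this would run from SQL Agent or another scheduler outside working hours):

```python
import csv
import sqlite3

# Stand-in source database; a real extract would connect to SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, "North", 100.0), (2, "South", 250.5), (3, "East", 75.25)],
)

# Stream the result set directly to CSV -- no rendering step,
# so 55,000 rows is no harder than 3.
with open("sales_extract.csv", "w", newline="") as f:
    writer = csv.writer(f)
    cursor = conn.execute("SELECT id, region, amount FROM sales")
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor)

print("extract complete")
```

The point is the shape of the solution, not the tool: a plain extract-to-file job, scheduled off-hours, is doing a different kind of work than an interactive report.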
Even that answer is not really the right one, as you have to wonder what this user is doing with the data and how it will be protected once it has been saved locally. You also have to question how long the data will be relied on, as it could become obsolete very quickly (especially in the uncertain commercial world we live in now).
So my considered answer is to buy the end user a cup of coffee and see what he really needs, and then work on that.
Often the issue is one of semantics. The user specifies their requirement as a "report" in the form of a spreadsheet, and this can drive people down the route of using a particular tool.
The other reason a second-best tool might be used is the desire to keep things simple and use one tool for all exports/reports rather than having a mixture.
I like to tell users that as soon as they print something out, it becomes out of date. It might not always be true but it should save paper.
Hmmm.. I'll be comparing this post with the post you write once Gemini is released, particularly the penultimate paragraph.
Gemini will allow the user to add data to the data warehouse, avoiding and even reversing this issue. Of course, that data could well be unclean or unreliable, but all data warehouses have a certain degree of this.
Thanks Andrew but I was under the impression Gemini was primarily about data consumption and allowing the user to create their own 'data warehouse' (read: lots of disparate, unconnected data warehouses) from whatever data source(s) they care to use. Have I got that wrong?