SQL.RU
 client/server technologies
Issue #353

CONTENTS

1. ARTICLES
1.1. Microsoft SQL Server I/O Basics. Part 2 (continued)
2. ARTICLE LINKS
2.1. Articles in Russian
2.2. Articles in English
3. SQL.RU FORUM
3.1. Most popular topics of the week
3.2. Unanswered questions
4. BOOKS
4.1. Mastering Microsoft SQL Server 2005
 

 

ARTICLES

Microsoft SQL Server I/O Basics. Part 2 (continued)

Based on Bob Dorr's article "Microsoft SQL Server I/O Basics, Chapter 2"

Copy-on-write and sparse files

A database snapshot contains the original images of data pages that have been modified in the source database since the snapshot was created. Creating a database snapshot involves immediately rolling back, in the snapshot, the transactions that were active at the moment of creation: to obtain a correct point-in-time view, the active transactions of the primary database must be undone in the snapshot, while the transactions in the primary database itself remain unaffected.
To save physical disk space, database snapshots are stored in special sparse files. This limits the snapshot's physical disk space requirement to the changed portion of its image: the more data pages are modified in the source database, the more physical space is needed to store the snapshot.
The files for a database snapshot are named in the CREATE DATABASE command. For example, the parent database file might be called main.mdf while the snapshot file is called snapshot_main.mdf.
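For illustration, a minimal sketch of creating such a snapshot (the database name main, the logical file name main_data and the file paths are hypothetical; the logical name must match the one used by the source database):

    CREATE DATABASE main_snapshot
    ON
        ( NAME = main_data,                        -- logical name of the source data file
          FILENAME = 'C:\Data\snapshot_main.mdf' ) -- sparse file that will hold the snapshot
    AS SNAPSHOT OF main;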
SQL Server 2005 uses copy-on-write for the snapshot's data pages. Before a page of the primary database is modified, it is copied into every snapshot associated with that database. It is important for administrators to monitor the physical size of database snapshots so that they can understand and forecast future storage requirements. Note that the smallest allocation unit of a sparse file is 64 KB, so disk space can be exhausted faster than you might expect.
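To watch how much physical space a snapshot's sparse files actually occupy, a query along the following lines can be used (a sketch only; sys.dm_io_virtual_file_stats exposes size_on_disk_bytes in SQL Server 2005, and the snapshot name main_snapshot is the hypothetical one from the example above):

    SELECT DB_NAME(database_id)           AS snapshot_db,
           file_id,
           size_on_disk_bytes / 1048576.0 AS size_on_disk_mb   -- actual sparse-file allocation
    FROM sys.dm_io_virtual_file_stats(DB_ID('main_snapshot'), NULL);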
For more information about database snapshots, see "How Database Snapshots Work" in SQL Server 2005 Books Online.

The redo and undo phases of recovery use the transaction log and are designed to maintain transaction-level database consistency. This logic, however, does not apply to database snapshots: note that database snapshots have no transaction log. Therefore, all copy-on-write work must be completed during the transaction in the primary database.
The snapshot uses a combination of API calls to allocate a 64 KB region of the sparse file. In addition, a special allocation bitmap is maintained in memory to track the individual pages that have been saved in the snapshot. When the snapshot file is created or extended, SQL Server sets the appropriate file system attributes so that any unwritten region returns an image of all zeros in response to a read request.
When a modification request starts in the primary database, the data page is written to each database snapshot and the in-memory allocation bitmap is updated accordingly. Because this can generate I/O against the snapshot database, this extra load must be taken into account when deploying a copy-on-write solution. Determining whether a database page needs to be copied requires a combination of API calls and read requests: reads may be issued against the database snapshot to find out whether the data page has already been copied into it.
When SQL Server performs a copy-on-write operation into the sparse file and additional physical storage has to be acquired for it, the operating system may perform the write synchronously. To avoid this synchronous behavior, the SQLOS scheduler driving the copy-on-write can hand the write off to secondary workers from the worker pool. In practice, if several snapshots of the same primary database exist, the writes can be parallelized across several workers. The initiating worker waits for the secondary workers to complete their writes before it proceeds with the modification of the original data page. While SQL Server waits for the writes to complete, the initiating worker may show up as waiting on a replica write or on the data page's replica I/O latch.
Even if the transaction is rolled back, the data page has already been written to the snapshot. The sparse file acquires physical storage as soon as the write occurs, and the operation cannot be rolled back in a way that returns the allocated physical disk space.

Note: If a copy-on-write operation fails, the database snapshot is marked suspect and SQL Server raises the corresponding error.

It is important to understand that the increase in copy-on-write operations (in fact, in the writes and reads needed to determine whether a copy-on-write is required) can affect the performance profile of some queries. For example, the I/O speed of the database snapshot can to some degree limit a query's scalability. For this reason, a database snapshot should be served by a fast I/O subsystem.

Note: Utilities such as WinZip, WinRAR, COPY and others do not preserve the actual layout of sparse files. When a sparse file is copied with such a utility, all unallocated bytes are read back and rewritten as zeros. Copying the file correctly requires restoring its real allocations without losing the sparse-file attribute. To copy sparse files, use utilities that can capture the complete image and restore the file's metadata structure.

TO BE CONTINUED

[Back to top]

ARTICLE LINKS

Articles in Russian

Index rebuilds, disk queue length, and more
Nancy Michell
Q: How are indexes rebuilt in SQL Server? I want to understand why running DBCC DBREINDEX consumes all of the disk space, and why the space is not released if the command fails. The database is 90 GB, and the largest table is 70 GB.

Worst practices (MS SQL Server): case-sensitive databases (or anything else)
Andy Warren
We have already covered some interesting ideas and received a large number of reader responses. This week I would like to discuss a topic suggested by a reader (and fellow columnist!), Mindy Curnutt: the case where an entire database is case sensitive. Can anyone argue that case sensitivity does not add a layer of complexity? Writing good software, whatever language you use, is hard enough without this problem!

Forced testing of T-SQL procedures
Dan Sawyer
Using dynamic queries and web reports to automate tests: testing a procedure used to be the final "sanity" check before releasing it: does the procedure work cleanly and do everything it was designed to do? If the answer was yes, you could release it and move on. Only a few database administrators found time to deal with atypical usage scenarios such as invalid parameter values, missing data, or unavailable objects such as work tables. But this linear approach no longer works. We now know that stored procedures, like any other software, must be tested and retested before release until they meet the established standards. And here an obstacle awaits us, because unit testing is not a simple process: it requires laborious entry of test values and tracking of test results in an attempt to determine whether the procedure does everything it is supposed to do, and nothing more.

Hard drive performance: THG's assessment
Dmitry Chekanov
If you compare progress in hard drives with that of graphics cards or CPUs, it is easy to see that the general public is largely unaware of any of the technologies involved. The hard drive market looks boring, but only at first glance. In reality it keeps moving forward: recording density and performance continue to grow. Apart from capacity, however, this progress is hard for the average consumer to follow. Even experts sometimes could not tell two similar hard drives apart were it not for the specification label, even though their performance may differ considerably. And if you compare hard drives with similar specifications, say drives within one product line, there is still a measurable difference.

Put in a word for poor LINQ...
Ivan Bodyagin
The veil of secrecy over C# 3.0 and the LINQ project was first lifted about a year and a half ago, when the creators and ideologues of the project, Don Box and Anders Hejlsberg, spoke at PDC 2005 and told the world what it was all about. Since then the project has been discussed many times in programming forums and blogs as well as at large conferences, including Russian ones - Platforma, DevDays, WebDevCon, etc. Time passed, however, the release never came, interest faded, and the project began to accumulate rumors that, as proper rumors should, had little in common with reality. Recently, for a number of objective reasons - spring, a full moon, the unavoidable proximity of the release of C# 3.0, LINQ, the next version of the Framework and of the IDE - talk about LINQ and C# 3.0 has flared up again, and it has become quite obvious that people have no clarity on the subject. Realizing the scale of the disaster, I want to try to write a few blog posts intended to clear up the confusion, fill the gap and otherwise restore order in developers' minds, to the best of my modest abilities. That is the larger goal; the immediate goal is more modest - not to confuse things even further. Perhaps an article of some sort will come out of it as well...

PROTECTING YOUR COMPUTER FROM NETWORK ATTACKS
Prof. A. Yu. Shcheglov, D.Sc. (Eng.)
Today, one of the most serious (if not the main) IT security threats is the possibility of running an unauthorized program on the protected computer; the largest number of critical network attacks is associated with it. The reason is that in most cases it is precisely such programs that are used to attack protected resources, both informational and system resources. The ways in which malicious programs are planted and launched vary widely, as do their properties and purposes. They include exploits, usually launched by attackers through vulnerabilities caused by programming errors in system and application software; all kinds of destructive, spyware and Trojan programs, which are extremely heterogeneous in the goals of their attacks and in their construction; and viruses that modify the executable files of authorized programs. The ways destructive code is planted can also differ: remotely, over the network, or locally, including by an authorized user (an insider). In the latter case it makes sense to view the destructive program as an insider's tool that can be used to break the protection system. But what if unauthorized (including malicious) programs simply could not be launched on the protected computer at all? Would that not be the only possible effective solution to a whole set of very different, and currently the most pressing, IT security problems? Just imagine how much such a solution, if found, would raise the level of IT security and of confidential data protection! So do the means of effectively protecting a computer from network attacks exist?

SS2K5: What to do when DBCC CHECKDB returns error 5030
Alexander Gladchenko
Yet another undocumented feature has come to light that lets you work around the requirement for exclusive use of a database while it is being checked with DBCC CHECKDB. If the database is not switched to single-user mode, an attempt to check it produces the messages shown below:

Recommendations on choosing Microsoft system software when deploying solutions on the 1C:Enterprise platform
A. Ionin, I. Syrtsov, V. Eliseev
Solutions built on the 1C:Enterprise platform are the most widespread among Russian small and medium-sized businesses. The quality and convenience of a solution depend entirely on the company that implements it and provides the training. In practice, however, a whole range of questions arises that relates neither to the functionality of 1C:Enterprise nor to the competence of a 1C technical specialist. These include licensing of Microsoft products, technical aspects of the software installation process that affect the performance, reliability and security of the solution, and options for organizing data storage and access to it.

[Back to top]

Articles in English

ADO.NET Orcas: Sample Provider
David Sceppa
Last month, the ADO.NET team hosted a number of ADO.NET Data Provider writers on campus to present information on how to enhance an existing provider to support the Entity Framework features in the upcoming Orcas release. To help provider writers who were unable to attend, we're publishing the material we presented to our team blog.

Conchango's Hairiest Man (2006)
Mick.Horne
It has been a while since my last post. There are a number of reasons for this - lack of time, the fact that blogging comes quite low on my list of fun things to do and mainly because the blog posts I had lined up on my laptop were dealt a severe blow following a United Airlines flight from LAX to Oakland (where my laptop bag was taken care of by their Valet Service) and a taxi ride from Oakland to San Ramon (where my laptop went in the boot). I can't actually prove when or where something speared a hole through the external casing, however what I can say is that I sent emails immediately before I boarded the flight at LAX and the next morning the laptop was dead. I'm also fairly sure that there wasn't a man hidden in the boot of the taxi with a portable hammer drill or someone jumping up and down on it in my hotel room while I slept.

VSTS4DBP: Check for permissions
Jamie Thomson
On my current project we are using Visual Studio Team System for Database Professionals aka Datadude to manage the deployment of SQL Server objects to our various dev, test, QA and production environments. In doing that I came across a bit of a problem that can easily occur if you are not diligently testing your deployment scenarios.

On Versatility and Being "Business Intelligent"
Adrian Downes
For some time I worked with a small BI consultancy (back when it was called a Data Warehousing consultancy). I had recently transitioned to this company from a prior role where data warehousing was more of a pastime than anything else. Thus, I was excited that BI would become a full-time pursuit. What I would come to learn from this relatively small consultancy was that the technical work itself made up a smaller proportion of a given BI project than the business case assessments and project management tasks involved. I was a young hot-head at the time, and found myself often on the defensive in light of ambiguity and seemingly changing requirements and priorities mid-stream from project to project. I found it difficult to understand the financial and operational terms used by our customers, and also found myself at odds with a certain project manager who seemed to be more interested in saying "yes" to each and every change request without consulting the technical consultants (myself included). By the time I was ready to move on, I had a gutful of what I believed to be "corporate B.S.", and felt that I was best served to "stay technical". On my last day, I ventured over to head-office to hand in my laptop. The managing partner asked me aside, and mentioned that he was aware of my displeasure with how certain projects were (mis)handled. Instead of asking further questions, he simply said:

Dear DBA ....
Jeff Smith
Thanks so much for helping us developers out with the latest changes you've made to the stored procedures in our system. While it may have been nice if we got together first to discuss these changes, I do appreciate that you worked very hard to make things much easier for us developers. Despite the great changes you've implemented to assist us, I am afraid I must ask for *further* help in a few matters. I appreciate your patience.

SQL Server 2005: Using PARTITION and RANK in your criteria
Jeff Smith
The RANK and PARTITION features in 2005 are simply amazing. They make so many "classic" SQL problems very easy to solve. For example, consider the following table:

On the imperfect nature of reader/writer lock policies
Joe Duffy
One of the motivations of doing a new reader/writer lock in Orcas (ReaderWriterLockSlim) was to do away with one particular scalability issue that customers commonly experienced with the old V1.1 reader/writer lock type (ReaderWriterLock). The basic issue stems from exactly how the lock decides when (or in this case, when not) to wake up waiting writers. Jeff Richter's MSDN article from June of last year highlights this problem. This of course wasn't the primary motivation, but it was just another straw hanging off the camel's back.

Replicating Identity Columns
Andy Warren
SQL Server 2005 has changed replication dramatically and one of the changes is that subscriber tables have the identity column. SQL Server guru Andy Warren takes a look at what impact this has on transactional replication.

SQL Server 2005 Adoption Survey Results
Steve Jones
We recently ran a survey at SQLServerCentral.com and got over 600 responses. Here are the results and some commentary from Steve Jones

Turn AUTO_SHRINK off!!
Paul Randal
This week's topic is data file shrinking. I've seen lots of mis-information in the last few weeks and I've had a bunch of questions about it. First up is auto-shrink.

An Interview With Ken Henderson About The Forthcoming Book SQL Server 2005 Practical Troubleshooting: The Database Engine
Denis
I am a big fan of Ken Henderson's books, I believe that every SQL Server developer should have a copy of his books. When I noticed on Amazon.com that the book SQL Server 2005 Practical Troubleshooting: The Database Engine which listed Ken Henderson as its editor would be coming out soon I got very excited. I decided to send Ken an email to see if he would be willing to answer some questions I had about the book. To my surprise Ken was more than willing to accommodate my request.

Interview With SQL Server MVP Louis Davidson: Author Of Pro SQL Server 2005 Database Design and Optimization
Denis
After interviewing Ken Henderson I decided to try my luck with Louis Davidson. By now you already know that he said yes ;-). Louis is the author of Pro SQL Server 2005 Database Design and Optimization and a SQL Server MVP. I have a copy of his book and I highly recommend it to anyone who wants to learn about designing and optimizing databases. The question-and-answer session with Louis that follows was conducted via email.

Best practices for Domain Account for SQL Server services
sqlmaster
For every DBA this is a big question: what are the best practices for the domain account used by the SQL Server services? I've done some searching, but have found no definite answer yet. We have various SQL farm setups in our environment, with servers that are members of Active Directory and that also run the SQL services under an ADS account.

Find Gaps in a Range
WARDY IT
Common Table Expressions (CTEs) are a new structure in SQL Server 2005 that can be used to assist with performing recursive operations. The following example demonstrates how a CTE can be used to find the gaps in a range.
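(The linked article's own code is not reproduced in this digest; the following is only a rough sketch of the idea, assuming a table dbo.Numbers with a single int column n.)

    DECLARE @min int, @max int;
    SELECT @min = MIN(n), @max = MAX(n) FROM dbo.Numbers;

    WITH seq (n) AS
    (
        SELECT @min                        -- anchor: lowest value in the range
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < @max
    )
    SELECT s.n AS missing_value            -- values inside the range with no matching row
    FROM seq AS s
    LEFT JOIN dbo.Numbers AS t ON t.n = s.n
    WHERE t.n IS NULL
    OPTION (MAXRECURSION 0);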

SSAS Many to many optimization technique #1
Erik Veerman
This has been something brewing for a few weeks that I am just getting to writing down… and yes, naming this idea #1 does imply that I have another idea :) so stay tuned. Before diving into the details, the summary is that this approach, when applied against the AdventureWorksDW database, was able to take the intermediate measure group row count down from approximately 16,000 to only 19. Wow… because the size of the intermediate measure group has a big impact on performance. Let's call this the "Many to Many Matrix Relationship Optimization" approach for SSAS M:N relationships… I've been working on this with Dan Hardan (Msft) because of a need to make many-to-many relationships perform better. As background, there are many kinds of many-to-many relationships, but let me summarize two of the common kinds…

Windows Server 2003 Service Pack 2 - the good, the bad, and the ugly
Rodney R. Fournier
By now you are aware that SP2 is out for Windows Server 2003 and R2. I have already seen quite a few posts in the public newsgroups where people are not aware of a few things.

SQL Server 2005 SP2 - Trace Event Change - Connection Based Events
Bob Dorr
Starting with SQL Server 2005 SP2 the connection based events are also produced for sp_reset_connection activities, indicating the connection was reset. The pattern will look like the following

Using SSIS object model from C++
Michael Entin
All the samples in MSDN show how to use the SSIS API from C# or VB.NET code. Is it possible to do this from C++ code? The answer is yes; if you know COM it should be quite easy for you, and the COM API is very close to the .NET API (I think the .NET API is nicer due to its rich type system and properties).

Can you trust your constraints?
Hugo Kornelis
SQL Server allows you to temporarily disable your CHECK and FOREIGN KEY constraints. I would not recommend this as part of your regular production schedule, but there are certainly cases where this comes in handy.

Blocking from Foreign Key Constraint look ups - Clustered (blocks) v Heap (doesn't block)
Tony Rogerson
When using foreign key constraints watch out for blocking problems - remember for each reference clause and row you are inserting or updating (if the column you are updating is part of a foreign key reference) SQL Server will do a look up query to make sure the corresponding row exists in the referenced (parent) table.

Reporting Services SP2 and MDX Summary Rows
Greg Galloway
If you have built any Reporting Services reports which use an Analysis Services datasource, SQL Server 2005 SP2 contains a fairly major change to be aware of. I'm going to describe the way it worked before SP2, then I'll describe the change.

Using and Managing Database Mail
Gregory A. Larsen
With SQL Server 2005 comes a new mail sub-system, called Database Mail. Database Mail is a great improvement over SQL Mail that came with earlier versions of SQL Server. Now you are able to set up multiple accounts and profiles to support your different application email needs. This article will discuss setting up and managing Database Mail.

Improve query performance with SQL Server 2005 execution plans
Baya Pavliashvili
One of the most exciting and useful additions to the query performance tuner's arsenal is the ability to force the database engine to use a pre-generated query execution plan. Previous versions of SQL Server allowed a DBA to specify query hints, to perhaps alter the transaction isolation level, or to enforce usage of a given index or a join type. With SQL Server 2005, you can actually create a full execution plan and feed it to a query. You can also inject query hints into an application, without modifying the code.

SQL Server XML Cribsheet
Robyn Page
For things you need to know, rather than the things you want to know. This is written with the modest ambition of providing a brief overview of XML as it now exists in SQL Server, and the reasons for its presence. It is designed to supplement articles such as Beginning SQL Server 2005 XML Programming.

Sales Order Workshop Part II
Jacob Sebastian
In the first part of this series, Jacob Sebastian showed us how to use XML to save a sales order to the database with a variable number of line items. In part 2, he continues looking at XML in SQL Server 2000 with some advanced XML processing.

Pivot table for Microsoft SQL Server
Peter Larsson
One of the seemingly more popular T-SQL enhancements in SQL Server 2005 is the PIVOT operator. There have been quite a few articles about it, but new author Peter Larsson decomposes in detail how you can perform this operation with previous versions.

How does your schema affect your shrink run-time?
Paul Randal
For part two of my short series on data file shrinking, I want to look at how elements of your schemas can cause shrink to take much longer to run. In SQL Server 2005, three things in your schema will drastically affect the run-time of data file shrink.

What is the new licensing scheme that is part of Microsoft SQL Server 2005 Service Pack 2 (SP2)?
John Savill
A. With SQL Server 2005 SP2, you can now run an unlimited number of virtual instances of SQL Server on a physical box that's licensed for SQL Server 2005 Enterprise Edition. This will allow organizations to try new combinations of SQL Server instances to test the performance of their servers.

My Last Interview With Jim Gray
John Foley
A few weeks before Microsoft researcher Jim Gray set sail on his ill-fated voyage from San Francisco Bay, we had an e-mail exchange. As a journalist who's covered Microsoft since the mid '90s, I got to know Microsoft's database genius as someone who was invariably helpful, and I contacted Jim for a story I was researching on Hewlett-Packard's move into the data warehousing market. As always, he knew the answers to my questions and offered more than I had asked him for. He also made an observation that, in light of his disappearance, now seems eerily prophetic.

SQL Server Compact Edition and Synchronization Updates
OakLeaf
Steve Lasker's Creating your SQL Server Compact Edition database and schema in code post proposes to create your local SSCE database on the fly from T-SQL commands that you store as an application resource. SSCE's DDL doesn't accept multiple (batched) commands but SQL Server Management Studio [Express] (SSMS[X]) SP2 hides this limitation from you by allowing GO to separate batched commands.

Explanation Behind the Incorrect Query Results
Jesse Orosz
After about a week and a half of ongoing discussions we've come up with an explanation as to why the queries were returning bad numbers. Thank goodness!

Tricks to help DBA's get more sleep
Matt Hollingsworth
If you are a DBA, the chances are that your sheep counting is interrupted by a pager or cellphone going off in the middle of the night. While you can't eliminate it, having an explicit Service Level Agreement will help you to handle outages that occur and to establish an understanding of the level of uptime that the business users will get given the level of funding for the systems. Availability is complicated by the fact that it means different things to different people, but one thing is clear - your ultimate availability is measured by the users of the applications who access the database. Taking into account this perspective, you can clarify how you pursue your system availability investments by ensuring you consider and capture the availability system design goals and resulting technologies using a framework:

MTBF is useless - how to decide when to use RAID for Database High Availability
Matt Hollingsworth
Building reliable database systems is complicated by the fact that, as engineers, we tend to think about system design in a narrow way that is usually focused on the individual technologies. In order to make the right choices about when to use specific high availability technologies, it is important to back up and take a big picture view. The first step is to understand what it means for a database system to be highly available. Sounds simple, right? What we typically want to enable is the end user of the database application to achieve a specific level of serviceability - and this is defined by an explicit or implicit service level agreement. The service level agreement should be formally defined and include the response time for specific operations, availability times, data consistency needs, times services will be available, need for protection against site failure etc. Once the SLA is understood, we can take the next step and evaluate how to achieve the best uptime based on technology that's available.

Bulk Data Loading for the interfaces
Decipherinfosys
A key design consideration when interfacing with another application, where the integration is not real time and the volume of data involved is high, is that the data should get into the system as fast as possible. Whether that data is in the form of XML or just data records doesn't matter. We had a similar situation at a client site over the weekend. The client was integrating with another application, and they get their data loads in the form of an XML document. This data is provided to them every 3 hours, and they are supposed to receive the XML, consume it and process it, as well as send back an XML document detailing any errors that occurred in the load so that the host system could correct it on their end. It was also required to store the error messages within their system and to give the supervisor the ability to correct any issues with the data that was provided, so that they do not have to wait another 3 hours to get the data again. The host system in this case was a mainframe, which was capable of providing the data only every 3 hours since it required a lot of processing to be done.

Creating your SQL Server Compact Edition database and schema in code
Steve Lasker
In most cases you'll want to create your database and its schema as part of your application's first deployment. Depending on the scenario, you may create it on the client, or create it on the server, do the initial sync, then stream the file down to the client pre-populated.

Beer and diapers revisited - not just an urban legend
Donald Farmer
These are days of turmoil and upheaval in the Business Intelligence world. So it's good to hear of a story with a happy ending. And what could be happier than discovering that an urban legend is, in fact, true? You know the old tale of the supermarket chain mining their data to discover that sales of beer were uncannily linked to sales of diapers. And every time you hear it, someone is on hand to debunk it.

Someone I admire
Nick Barclay
I admire a lot of people. They are people who do things that I aspire to or who do things that I can't. I need to add another to my list as a result of a client I worked for this week.

SSIS - Some random tidbits
Phil Brammer
After responding to a few posts over on the SSIS forum today, I thought I'd post a few helpful tidbits.

Sending Header and Detail to different destinations
John Welch
A question came up today on the SSIS forum about processing flat files that included header and detail information. The poster wanted to send header information to one destination and detail information to another. The header rows have 2 columns, the detail rows have 5.

Comparing methods for translating values in SSIS
John Welch
A question came up recently on the SSIS forums on translating lookup codes into a description within the data flow. That is, given code "A", how do you translate it into a more descriptive value, such as "Apple"?

Using For Each to Iterate a Resultset
John Welch
Update: Kirk Haselden has a great example of this that I found right after I posted this. http://sqljunkies.com/WebLog/knight_reign/archive/2005/03/25/9588.aspx. There's been a number of questions about iterating through a collection of items and performing an operation for each item in the SSIS forums recently, so I put together a quick example of doing this.

Writing a Resultset to a Flat File
John Welch
This is another example package. This one retrieves a resultset and stores it in a variable. It then uses a data flow with a custom script source to read each row of the resultset and output it as a delimited string. Why would you want to do this? Well, it enables you to treat a resultset generically from a data flow perspective. There is only a single "column" defined in the metadata. So this works well if you want to export an unknown number of columns from a resultset to a flat file.

Quick MS Access (JET SQL) to SQL Server (T-SQL) Cheatsheet
Jeff Smith
Lots of questions come up in the SQL Team forums about conversions between Access and T-SQL and some of the differences between the two SQL dialects. Here's a few handy things to help you out with converting your projects. Check in now and then as this short list will eventually grow as more things come up.

More with SQL Server 2005 : Top x Per Group, Paging, and Common Table Expressions
Jeff Smith
I previously wrote about a few of the new features in SQL 2005 and how they can be used to solve some old "classic" SQL problems very easily, and I thought I'd briefly discuss a few more. None of this is earth-shattering stuff, but you may find seeing a bunch of these techniques listed in one place useful.

Named Pipes Provider, error: 40 - Could not open a connection to SQL Server
MING LU
This error is the one most frequently hit by our customers, and in this post I give a brief summary of troubleshooting tips for this specific error message. First, take a look at the MSDN forum links below about this topic:

A world without Disk Seek Time - the Solid State Disk really is here
Tony Rogerson
I've been keeping an eye on this space, first with Gigabyte's i-RAM, which offers 4GB of battery-backed RAM drive that acts as a SATA drive, but lately with Samsung's SSD.

Logical Functions: IsAncestor(): Conditional Logic within Filter Expressions
William Pearson
In IsAncestor(): Conditional Logic within Calculations, another article within my MDX Essentials series, we introduced the IsAncestor() function from the perspective of its use within a calculation. We discussed the straightforward purpose of this logical function, to return whether or not a specified member is an ancestor of another member we specify (By "ancestor," of course, we mean a member from which the specified member is descended within a dimensional hierarchy.) We discussed the manner in which IsAncestor() manages to do this, and ways we can leverage the operator to support effective conditional logic to meet various business needs within our own environments.

Learning Guide: SQL Server performance tuning A to Z
SearchSQLServer.com
To ensure your SQL Server databases and applications run at their full potential, you must continually monitor and identify performance problems and quickly address bottlenecks. This guide will help you do all that and more. It covers everything SQL Server performance, from the very best to the very worst practices and performance-tuning myths. If you'd like to see a specific topic covered regarding SQL Server performance, send us an e-mail or ask expert Jeremy Kadlec for help.

Protect against SQL Injection by whitelisting
Steven Andres
OK, admittedly the best practice for guarding against SQL Injection attacks is to white-list acceptable characters. However, given that knowledge is power, is there a comprehensive list of special characters (such as the single-quote or double-hyphen) available for SQL Server? I have done a ton of searches and can't seem to find one (I did find the reserved words).

Storing IPs in SQL Server
David Poole
An IP address is something we all recognize and is a piece of data that is quite prevalent in many systems. However, it is a piece of data that presents some challenges in its storage and retrieval. SQL Server guru David Poole presents us with a look at how you can work with this strangely formatted data.

PowerSMO at Work Part 2
Dan Sullivan
My previous article, PowerSMO at Work I, demonstrated how to create and deploy secure, signed DBA scripts and then showed how to use PowerSMO functions to manage the extended properties of SQL Server objects.

SQL Server Backup I/O Performance
Linchi Shea
I had always thought that SQL Server backup reads and writes sequentially, and that SQL Server backup could fully utilize the throughput of the I/O path.

How do I insert a single quote (') into a Microsoft SQL Server database?
John Savill
A. Usually the single quote character is reserved for inserting information into a database, so if a single quote is contained in the actual data to be inserted, the command often becomes corrupted. To solve the problem, simply replace any single quote with two quotes (not the double-quote character, but two single-quote characters). The sample command below performs this replacement automatically: sContent = Replace(sContent,"'","''") where sContent contains the data that needs single quotes replaced so that the data can be inserted into a database.
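The same doubling rule applies when a T-SQL string is built dynamically; a minimal sketch (the table dbo.Customers and the variable names are invented for illustration):

    DECLARE @name nvarchar(50), @sql nvarchar(300);
    SET @name = N'O''Brien';
    -- double every single quote before embedding the value in a string literal
    SET @sql = N'INSERT INTO dbo.Customers (Name) VALUES (N'''
             + REPLACE(@name, N'''', N'''''') + N''');';
    EXEC (@sql);

Parameterized commands avoid the problem altogether, but the doubling rule above is what the answer describes.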

Disaster recovery worst practices: Don't test your backups
Peter Herbener
Consultant Peter Herbener has compiled a series of disaster recovery worst practices based on his experience. One way to really mess up is not testing your backup method adequately. If you're struggling with the basics, here are some tips to get back on track.

Disaster recovery worst practices: Don't look at your logs
Peter Herbener
Consultant Peter Herbener has compiled a series of disaster recovery worst practices based on his experience. Implementing a state-of-the-art backup system won't help if you're not paying attention to the logs. If you're struggling with the basics, here are some tips to get back on track.

Disaster recovery worst practices: Don't perform backups on a regular schedule
Peter Herbener
Consultant Peter Herbener has compiled a series of disaster recovery worst practices based on his experience. The number one mistake is not backing up on a regular schedule. If you're struggling with the basics, here are some tips to get back on track.

Disaster recovery worst practices: Save money on backup media
Peter Herbener
Consultant Peter Herbener has compiled a series of disaster recovery worst practices based on his experience. Trying to save money on backup media often backfires. If you're struggling with the basics, here are some tips to get back on track.

Scaling out with distributed partitioned views in SQL Server 2005
Tim Chapman
A distributed partitioned view joins data from one or more SQL Server databases. You can design these views in such a way that the query optimizer will know which table to pull data from in your query.

Giving Permissions through Stored Procedures
Erland Sommarskog
When designing an application for SQL Server, you rarely want users to have full permissions to access the tables in the database. Many applications are designed to perform all database access through stored procedures, and it is through the stored procedures users can access and update data. The procedures perform validations of business rules to protect the integrity of the database.

Update Status Field after Expiry Date
Joe Celko
SQL Apprentice Question: Consider the following table

Microsoft Patent Application for the Entity Framework and EDM
OakLeaf
The U.S. Patent and Trademark Office (PTO) published on March 8, 2007, U.S. Patent Application 20070055692, "Incremental Approach to an Object-Relational Solution." The inventors are the usual data programmability team suspects: Michael Pizzo, Pablo Castro, Jose Blakeley, Andrew Conrad, Alyssa Henry, et al. The application was filed on February 28, 2006, about four months before the first post on the Data blog and the once-missing ADO.NET 3.0 Entity Frameworks documentation reappeared.

Clustering terms made easy
Rodney R. Fournier
Clusters are Highly Available and should never be considered Fault Tolerant.

Computed Columns
Decipherinfosys
According to database normalization rules, a computed column (a column whose value is derived from the values of other columns) should not be stored in the schema; such values - averages, totals and so on - should always be calculated at runtime. At times, however, it is necessary to store such calculations in the schema, sometimes to help with read performance, for example calculating and storing whether a given date falls on a business day or on a weekend or holiday, or storing complex mathematical calculations. We ran into a situation for one of our clients where we needed to store the reverse value of one of the columns to enforce security in the system. In this case we decided to go with a computed column, since calculating it at runtime would have meant changes to the application, and the application was going to read this column a lot. We will show you how we have achieved this in Oracle, MS SQL Server and DB2.
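As a rough illustration of the SQL Server variant described above (the table and column names are invented), a persisted computed column holding the reversed value might look like this:

    CREATE TABLE dbo.Accounts
    (
        AccountCode varchar(20) NOT NULL,
        -- REVERSE() is deterministic, so the computed column can be PERSISTED
        AccountCodeReversed AS REVERSE(AccountCode) PERSISTED
    );

Because the column is persisted, it can also be indexed to speed up lookups on the reversed value.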

Conditional Joins in SQL Server
Jeff Smith
Sometimes, when writing SELECTs, we come across situations in which we need to write a join containing a condition of some sort. You might think to do this with either a CASE expression or with some OR boolean logic in your JOIN expression. Often, you might encounter syntax errors, performance problems, or results not being returned the way you expect when trying this. There's a much better way to approach the problem.

PerformancePoint CTP2 - Monitoring and Analytics Update
David Francis
I'm obviously getting a bit tardy in my old age, as I meant to do this last week when the news came out (29/03/2007). Shocking I know.

PerformancePoint CTP1 to PerformancePoint CTP2 - No upgrade path, reinstall
David Francis
The latest from Microsoft is that although there will be an upgrade path from CTP2 to CTP3, there is no such thing for going from CTP1 to CTP2. We are talking reinstall. I asked Mark Yang at Microsoft if there were any structural changes to the two planning databases (Application and Staging) and this is his answer.

Sales Order Workshop Part III
Jacob Sebastian
In the previous articles, Jacob Sebastian looked at using XML to save a sales order with variable numbers of line items to a SQL Server 2000 database. In this part, he expands upon the processing to access that data from multiple nodes.

Using Multiple Active Result Sets (MARS)
Mladen Prajdic
SQL Server 2005 has so many new features that in my opinion if you read only BOL for a year you'd find something new every day. One of those is Multiple Active Result Sets or MARS. Multiple Active Result Sets is a new SQL Server 2005 feature that, putting it simply, allows the user to run more than one SQL batch on an open connection at the same time.

Hello OUTPUT See You Later Trigger Or Perhaps Not?
Denis
SQL Server 2005 has added an optional OUTPUT clause to the UPDATE, INSERT and DELETE commands. It enables you to accomplish almost the same task as an AFTER trigger would in SQL Server 2000.
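A minimal sketch of the OUTPUT clause the abstract refers to (the table dbo.Orders and its columns are hypothetical):

    UPDATE dbo.Orders
    SET    Status = 'Shipped'
    OUTPUT deleted.OrderID,
           deleted.Status  AS OldStatus,   -- value before the update
           inserted.Status AS NewStatus    -- value after the update
    WHERE  OrderDate < '20070101';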

SQL Server 2005 Upgrade Handbook
Douglas McDowell, Erik Veerman, and Michael Otey
If your organization depends on SQL Server today, prepare for SQL Server 2005-an upgrade that delivers immediate results for existing applications and continues to deliver results as you enhance existing applications and develop new applications to fully exploit the new capabilities. Today's business environment demands a comprehensive data-management platform that delivers business results with efficiency. SQL Server 2005 provides a comprehensive data-management platform, integrating the development and management of relational data as well as extraction, transformation, and loading (ETL), OLAP, and reporting with the security, performance, and availability to meet the needs of the most demanding enterprise applications.

Rev Up Performance with Custom Connectors
Douglas McDowell, Jay Hackney
One of the most dramatic changes in SQL Server 2005 is the redesign of Data Transformation Services (DTS); renamed SQL Server 2005 Integration Services. SSIS is a strong extraction, transformation, and loading (ETL) product that offers insane performance, a large catalog of dynamic components, a sound deployment model, and flexibility and extensibility. Extensibility has been a strong suit for Microsoft products, and the company has concentrated on building robust platforms that meet common customer needs but that customers can extend to meet uncommon needs as well. As you consider the role SQL Server 2005-and particularly SSIS-will fulfill in your environment, you need to understand what platform extensibility will let you do.

Implementing Referential Integrity and Cascading Actions
Itzik Ben-Gan and Thomas Moreau
This article is a chapter excerpt from the book Advanced Transact-SQL for SQL Server 2000 from APress Books. This chapter covers the use of declarative referential integrity as well as the use of triggers to maintain data integrity. (42 printed pages)

Pulling Out the Stops
Tom Moreau
One of a DBA's many responsibilities is to ensure that both referential and data integrity are maintained. This month, Tom Moreau shows you how to break the rules and get away with it when the need arises. When a new version of SQL Server (or any other product) is released, I try to make time to reflect on applications I've delivered under the old version to see if there's anything in the new release that would have made me design something differently. Just such a situation occurred when I received version 7.0 of SQL Server. I was the DBA on a project at a major Canadian bank. We were managing the distribution of banking cards, and we had to manage cards and customers. A customer could have many cards, but a card belonged to only one customer once it had been assigned. Until the card was assigned, however, the Customer ID had to be NULL. Now, consider this: The bank can have millions of customers and millions of cards. Cards spend most of their lives assigned to a customer. It would be a real pity to have NULLs just for that small percentage of unassigned cards. (See my article, "Much Ado About Nothing," in the July 1999 SQL Server Professional.) One solution is to make the Customer ID column NOT NULL in the Card table and then insert a dummy row in the Customer table. I like to avoid having dummy data, however, since every query you write against such a table has to account for the fact that you have dummy data. Wouldn't it be nice if you could insert the row into the Card table with, say, 0 as the Customer ID? You could then assign it to a real Customer later. This would also permit you to do a query for those cards with a Customer ID of 0 - the unassigned cards - to determine the number of cards that have yet to be granted to customers. Well, SQL7 allows you to do exactly that. Now you can temporarily turn off constraints and triggers without actually dropping them.

Looking at Lookups
Tom Moreau
Tom Moreau has seen enough cries for help with Lookups in the Usenet newsgroups that he's decided that it's time again to put pen to paper-or fingers to keyboard, or whatever we call it in the new millennium. Lookups are used with Data Pumps and Data-Driven Queries (DDQs). As the name implies, they allow you to look up data from a given data connection based upon a value or set of values that you give it. You can have as many Lookups for each Data Pump or DDQ as you like. Each Data Pump or DDQ has its own collection of Lookups associated with it-the DTSLookups collection.

SQL Server 2000's INSTEAD OF Triggers
Tom Moreau
SQL Server 2000 introduces INSTEAD OF triggers. They might not solve all of your problems, but they can help you in some weird situations. And did you know that they're also permitted on views? Tom Moreau explains.

Feeding XML to a Stored Procedure
Tom Moreau
Let's face it-XML (eXtensible Markup Language) is coming your way soon. Now, while Tom Moreau's not an XML kind of guy-he's of the T-SQL persuasion-he does see a use for it when you want to add some hierarchical data through a single stored procedure. Read on.

Identity Crisis
Tom Moreau
Up until the release of SQL Server 2000, using IDENTITY columns on tables that had triggers was a bit tricky. If you didn't know what you were doing, you could get, well, "unexpected" results. This month, Dr. Tom Moreau investigates the IDENTITY column and the @@IDENTITY function.

Setting Limits with Triggers
Tom Moreau
Dr. Tom Moreau found an interesting problem on the Usenet. The poster wanted to limit the number of detail rows for an order to a maximum of four. As you might have guessed, Dr. Tom took the question as a challenge: "I thought it would be an interesting exercise to implement the answer by building a trigger."

Dr. Tom's Workshop: Generating Sequence Numbers
Tom Moreau
Very often, developers need a table that has a sequence of numbers. You know the drill-the number of rows can vary, but they must be guaranteed to be consecutive. Ach, but what about gaps? Pull up a seat while Dr. Tom Moreau shows you how to get there. Transact-SQL is a powerful language, but as I (and many others) have said before, its power comes from data retrieval and manipulation. Generating a series of sequential numbers isn't something it can do "out of the box." Of course, you want that in the form of a table, since that's how we SQL jocks like to see our data. No arrays for us, no sirree. Oh, and yes, you (and the developer and the users) also want to be able to determine in advance how long the sequence will be. That sounds like a multi-statement, user-defined function (UDF) to me. I must confess that sometimes, I take the path of least resistance. Yes, even Dr. Tom has his lazy moments and resorts to creating a loop (see Listing 1), iterating as many times as needed.

Dr. Tom's Workshop: A Small Fish in a Big Pond
Tom Moreau
In Dr. Tom Moreau's March 2003 column-"Sleeping with Elephants"-he explained how to handle situations where data is heavily skewed to one value. This month, he has a go at the other end of the spectrum-getting at the small minority.

Dr. Tom's Workshop: Performance Through Persistence
Tom Moreau
T-SQL allows you to solve the same problem many different ways. Sometimes, alternatives aren't that obvious but can give you a satisfying, pleasant surprise. Read on as Dr. Tom Moreau checks out some possibilities for a solution–and discovers some additional gems along the way.

Dr. Tom's Workshop: Multiple-Child Aggregation
Tom Moreau
Aggregation, which is commonly used to deliver reports to users, often also delivers poor performance. To help guarantee that you're delivering the best performance possible, you should explore different versions of the query and pick the best one. This month, look over Tom Moreau's shoulder as he shows you how to work with multiple-child aggregations.

Dr. Tom's Workshop: How Table Variables Can Speed Up Your Queries
Tom Moreau
For years, you've seen Tom Moreau solve many T-SQL problems, eschewing such ugly approaches as temp tables, their table variable cousins, and–worse yet–cursors. This month, Tom makes a concession, showing that the exception proves the rule. Along the way, he shows you how to create what amounts to an index on a table variable. Very cool.

MSDE (SQL 2000) to SQL 2005 DTS Woes
R. Aaron Zupancic
I've been battling a peculiar issue for a couple of days regarding Data Transformation Services (DTS) with SQL Server 2000 and SQL Server 2005. I wrote a small C# application that utilizes DTS to pull a database from a server down to a client machine. All in all, the application runs beautifully and the information synchronizes seamlessly. Occasionally, however an odd error was reported: Unable to connect to source server for Transfer. This was odd because all of my connections were succeeding. It was also only on this computer that the error was occurring - I could not duplicate the error on other computers so I began delving into the differences between the machines.

[Back to top]

SQL.RU FORUM

Most popular topics of the week

Microsoft SQL Server. Useful algorithms from SQL.RU (+CD)
Configuring external RAID arrays for data volumes over 1 TB
Refreshing large result sets - critique welcome
Confusion about using the XLOCK hint in SELECT
Storing files in a database
How do you pass a set of records to a function?
Building a fault-tolerant MS SQL Server 2005 server for critical workloads
24,000,000 records
Connecting to SQL2k5 over the Internet
Where is the promised full-text search with Russian word forms in MS SQL 2005?
Is there any mechanism at all that replaces a cursor?
A query is not executing correctly
Optimizing a trigger script
SQL2k5 and the impact of collation on performance
Can this be done in a query?
dbcc shrinkfile (@file, emptyfile) is slow
How do I generate records?
Jim Gray went missing at sea on his yacht
A password generator in SQL
Space Available

[Back to top]

Unanswered questions

A stored procedure for a table with an xml column, taking xml as input. HOW?
Partitioning and games with moving filegroups
XML, encoding, loading, BCP
What does it cost
Conference "ForeFront and System Center - information security and infrastructure management"

[Back to top]

BOOKS

Mastering Microsoft SQL Server 2005

Mike Gunderloy, Joseph L. Jorden, David W. Tschanz

1104 pages, illustrated; ISBN 978-5-8459-1208-4, 0-7821-4380-6; 70x100/16 format; hardcover; Mastering series; Q1 2007; Dialektika.

This detailed guide will help you quickly deepen your knowledge of Microsoft SQL Server 2005. Emphasizing practice rather than theory, the book walks you through real examples of large-scale business applications, supplemented by an entire chapter on troubleshooting and important information about the improved database connectivity features. It describes five new SQL Server 2005 components: Integration Services, Analysis Services, Notification Services, Reporting Services and Service Broker. The book is intended for SQL Server 2005 administrators, developers and users.

[Back to top]

Copyright: SQL.Ru 2000-2013