This highly-upvoted SO answer recommends putting images in separate tables, even when there is only a 1:1 relationship with another table:

If you decide to put your pictures into a SQL Server table, I would strongly recommend using a separate table for storing those pictures - do not store the employee photo in the employee table - keep them in a separate table. That way, the Employee table can stay lean and mean and very efficient, assuming you don't always need to select the employee photo, too, as part of your queries.

Why? I was under the impression that SQL Server only stores a pointer to some dedicated BLOB data structure in the table, so why bother to manually create another layer of indirection? Does it really improve performance significantly? If yes, why?
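For context, the layout the answer recommends would look something like this (a hypothetical sketch; the table and column names are mine, not from the answer):

```sql
-- Hypothetical sketch of the recommended layout: the wide BLOB column
-- lives in its own table, joined 1:1 on the employee key.
CREATE TABLE employee (
    id   INT IDENTITY PRIMARY KEY,
    name NVARCHAR(100) NOT NULL
    -- ... other frequently-queried columns
);

CREATE TABLE employee_photo (
    employee_id INT PRIMARY KEY REFERENCES employee(id),
    photo       VARBINARY(MAX) NOT NULL
);

-- Everyday queries never touch the photo:
--   SELECT id, name FROM employee;
-- and it is fetched only when actually needed:
--   SELECT photo FROM employee_photo WHERE employee_id = @id;
```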

How can I select all rows from table_A and join on results from other tables, but also include a NULL row showing aggregate values for the rows in those other tables that don't have a table_A_id?

WITH RECURSIVE transactions_with_children AS (
  SELECT
    id,
    table_A_id,
    other_stuff,
    1 AS depth
  FROM transactions
  WHERE transactions.parent_transaction_id IS NULL
  UNION ALL
  SELECT
    children.id,
    children.table_A_id,
    children.other_stuff,
    parents.depth + 1 AS depth
  FROM transactions children
    INNER JOIN transactions_with_children parents ON children.parent_transaction_id = parents.id
)

SELECT 
  table_A.id,
  view_B.aggregate_col1, view_B.aggregate_col2,
  view_C.aggregate_col1, view_C.aggregate_col2

FROM table_A
  -- Limit table_A to only records with data from transactions_with_children, but I also need to include
  -- a null row for all transactions_with_children that don't have a table_A_id
  INNER JOIN transactions_with_children on transactions_with_children.table_A_id = table_A.id
  LEFT JOIN (SELECT
               t.table_A_id,
               SUM(aggregate_col1) AS aggregate_col1, SUM(aggregate_col2) AS aggregate_col2
             FROM transactions_with_children t
             GROUP BY t.table_A_id
            ) view_B
    ON table_A.id = view_B.table_A_id
  LEFT JOIN (SELECT
               t.table_A_id,
               SUM(aggregate_col1) AS aggregate_col1, SUM(aggregate_col2) AS aggregate_col2
             FROM transactions_with_children t
             GROUP BY t.table_A_id
            ) view_C
    ON table_A.id = view_C.table_A_id

UPDATE =======================================================

The recursive WITH table is built from scoped records of a large table. Each of the LEFT JOINs I'm doing against it just aggregates data from it: SUM, COUNT, etc. I then join on the common table_A_id that each of the records from the WITH may or may not have.

The problem with doing a RIGHT JOIN on transactions_with_children (I think) is that I would get tons of duplicates from the transactions_with_children table. I guess I could create a RIGHT JOIN sub-select where I pull records from transactions_with_children and group by.

something like:

RIGHT JOIN (
  SELECT t.map_account_affiliate_id
  FROM transactions_with_children t 
  GROUP BY t.map_account_affiliate_id
) ids ON map_account_affiliates.id = ids.map_account_affiliate_id

I assume the join sub-select above would include a NULL row if transactions_with_children.map_account_affiliate_id had any NULL values, right?

This seems like a hacky way to do it, but I'm not all that great with SQL either.
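For what it's worth, the grouped RIGHT JOIN idea can be sketched end to end like this (hypothetical column names; it leans on the fact that GROUP BY collapses every NULL table_A_id into a single group):

```sql
-- Aggregate first, then RIGHT JOIN so the single group of rows
-- with a NULL table_A_id survives as its own output row.
SELECT
  table_A.id,
  agg.total_cents
FROM table_A
RIGHT JOIN (
  SELECT table_A_id,
         SUM(amount_cents) AS total_cents   -- placeholder aggregate
  FROM transactions_with_children
  GROUP BY table_A_id                       -- all NULL ids form one group
) agg ON agg.table_A_id = table_A.id;
```

The output row with table_A.id IS NULL then carries the aggregates for every transaction that has no table_A_id, while table_A rows without matching transactions are dropped, as in the INNER JOIN version.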

I tried simplifying the query for the purposes of the question, but maybe it's easier to look at the original.

WITH RECURSIVE transactions_with_children AS (
  SELECT
    transactions.id,
    transactions.transaction_type,
    transactions.successful,
    transactions.test,
    transactions.amount_cents,
    transactions.parent_transaction_id,
    orders.map_account_affiliate_id AS map_account_affiliate_id,
    orders.t1_affiliate_tracking_id AS t1_affiliate_tracking_id,
    orders.t2_affiliate_tracking_id AS t2_affiliate_tracking_id,
    orders.id                       AS order_id,
    orders.front_end                AS front_end,
    1                               AS depth
  FROM transactions
    INNER JOIN line_items ON line_items.id = transactions.line_item_id
    INNER JOIN orders ON orders.id = line_items.order_id AND orders.campaign_id IN (104)
  WHERE transactions.parent_transaction_id IS NULL
        AND transactions.successful = TRUE
        AND transactions.transaction_type IN (3, 1, 22, 5, 55, 7, 8, 30, 31, 32, 33)
  UNION ALL
  SELECT
    children.id,
    children.transaction_type,
    children.successful,
    children.test,
    children.amount_cents,
    children.parent_transaction_id,
    orders.map_account_affiliate_id AS map_account_affiliate_id,
    orders.t1_affiliate_tracking_id AS t1_affiliate_tracking_id,
    orders.t2_affiliate_tracking_id AS t2_affiliate_tracking_id,
    orders.id                       AS order_id,
    orders.front_end                AS front_end,
    depth + 1                       AS depth
  FROM transactions children
    INNER JOIN transactions_with_children parents ON children.parent_transaction_id = parents.id
    INNER JOIN line_items ON line_items.id = children.line_item_id
    INNER JOIN orders ON orders.id = line_items.order_id
  WHERE children.successful = TRUE
        AND children.transaction_type IN (3, 1, 22, 5, 55, 7, 8, 30, 31, 32, 33)

)

SELECT *
FROM transactions_with_children;

SELECT
  map_account_affiliates.id                                                             AS resource_id,
  map_account_affiliates.display_name                                                   AS resource_name,
  'affiliate'                                                                           AS resource_type,
  ''                                                                                    AS parent_resource_type,
  ''                                                                                    AS parent_resource_id,
  COALESCE(sales.num_sales, 0)
                                                                                        AS num_sales,
  COALESCE(sales.num_scheduled_sales, 0)
                                                                                        AS num_scheduled_sales,
  COALESCE(sales.num_fe_sales, 0)
                                                                                        AS num_fe_sales,
  COALESCE(sales.gross_sales, 0)
                                                                                        AS total_gross_sales,
  COALESCE(sales.gross_scheduled_sales, 0)
                                                                                        AS total_gross_scheduled_sales,
  COALESCE(sales.gross_sales :: FLOAT / nullif(sales.num_fe_sales :: FLOAT, 0)
  , 0)
                                                                                        AS avg_gross_order_value,
  (
    COALESCE(sales.num_fe_sales / nullif(((SUM(sales.num_fe_sales)
    OVER ()) :: FLOAT), 0)
    , 0)
    * 100.0
  )                                                                                     AS percent_all_fe_sales,
  COALESCE((sales.gross_fe_sales :: FLOAT / nullif(sales.num_fe_sales :: FLOAT, 0)), 0) AS avg_fe_sale_amt,
  COALESCE(refunds.num_full_refunds, 0)
                                                                                        AS num_full_refunds,
  COALESCE(refunds.num_part_refunds, 0)
                                                                                        AS num_part_refunds,
  COALESCE(refunds.total_refunds, 0)
                                                                                        AS total_refunds,
  COALESCE(refunds.total_refunds :: FLOAT / nullif(sales.gross_sales, 0)
  , 0)
                                                                                        AS percent_refund_amount,
  (COALESCE(refunds.num_part_refunds, 0)
   +
   COALESCE(refunds.num_full_refunds, 0)
  ) :: FLOAT / nullif(sales.num_sales, 0) * 100.0                                       AS percent_refunds,
  COALESCE(chargebacks.num_chargebacks, 0)
                                                                                        AS num_chargebacks,
  COALESCE(chargebacks.num_chargebacks_won, 0)
                                                                                        AS num_chargebacks_won,
  COALESCE(chargebacks.total_chargebacks, 0)
                                                                                        AS total_chargebacks,
  COALESCE(chargebacks.total_chargebacks :: FLOAT / nullif(sales.gross_sales, 0)
  , 0)
  * 100.0                                                                               AS percent_chargeback_amount,
  (chargebacks.num_chargebacks) :: FLOAT / nullif(sales.num_sales, 0) * 100.0           AS percent_chargebacks,
  (
    COALESCE(sales.gross_sales, 0) -
    COALESCE(refunds.total_refunds, 0) -
    COALESCE(chargebacks.total_chargebacks, 0)
  )                                                                                     AS total_net_sales,
  (
    (
      COALESCE(sales.gross_sales, 0) -
      COALESCE(refunds.total_refunds, 0) -
      COALESCE(chargebacks.total_chargebacks, 0)
    ) / nullif(sales.num_fe_sales, 0)

  )                                                                                     AS avg_net_order_value

FROM map_account_affiliates

  LEFT JOIN (SELECT
               t.map_account_affiliate_id               AS map_account_affiliate_id,
               COUNT(DISTINCT t.order_id)               AS num_sales,
               COUNT(t.id)
                 FILTER (WHERE t.transaction_type = 22) AS num_scheduled_sales,
               SUM(t.amount_cents)
                 FILTER (WHERE t.transaction_type = 22) AS gross_scheduled_sales,
               SUM(t.amount_cents)                      AS gross_sales,
               COUNT(DISTINCT t.order_id)
                 FILTER (WHERE t.front_end = TRUE)      AS num_fe_sales,
               SUM(t.amount_cents)
                 FILTER (WHERE t.front_end = TRUE)      AS gross_fe_sales
             FROM transactions_with_children t
             WHERE t.transaction_type IN (3, 1, 22)
             GROUP BY t.map_account_affiliate_id
            ) sales
    ON map_account_affiliates.id = sales.map_account_affiliate_id
  LEFT JOIN (SELECT
               t.map_account_affiliate_id               AS map_account_affiliate_id,
               COUNT(t.id)
                 FILTER (WHERE t.transaction_type = 5)  AS num_full_refunds,
               COUNT(t.id)
                 FILTER (WHERE t.transaction_type = 55) AS num_part_refunds,
               SUM(t.amount_cents)                      AS total_refunds
             FROM transactions_with_children t
             WHERE t.transaction_type IN (5, 55)
             GROUP BY t.map_account_affiliate_id
            ) refunds
    ON map_account_affiliates.id = refunds.map_account_affiliate_id
  LEFT JOIN (SELECT
               t.map_account_affiliate_id              AS map_account_affiliate_id,
               COUNT(t.id)
                 FILTER (WHERE t.transaction_type = 7) AS num_chargebacks,
               COUNT(t.id)
                 FILTER (WHERE t.transaction_type = 8) AS num_chargebacks_won,
               (
                 COALESCE(SUM(t.amount_cents)
                            FILTER (WHERE t.transaction_type = 7), 0)
                 - COALESCE(SUM(t.amount_cents)
                              FILTER (WHERE t.transaction_type = 8), 0)
               )                                       AS total_chargebacks
             FROM transactions_with_children t
             WHERE t.transaction_type IN (7, 8)
             GROUP BY t.map_account_affiliate_id
            ) chargebacks
    ON map_account_affiliates.id = chargebacks.map_account_affiliate_id

WHERE map_account_affiliates.account_id = 1

      AND (num_sales > 0 OR num_fe_sales > 0 OR num_scheduled_sales > 0 OR num_full_refunds > 0
           OR num_part_refunds > 0 OR num_chargebacks > 0)

ORDER BY resource_name

I know this is a long, messy query, but hopefully my explanation makes enough sense to get my question across.

Thanks!

This is a general design question, not related to a specific DBMS.

I will use a basic employee record to illustrate. I'm not worrying about the specific syntax, I am asking for help with the approach.

CREATE TABLE employee(
  id SERIAL,
  name TEXT,
  manager_id INT4,
  modified TIMESTAMP DEFAULT NOW(),
  modified_by INT4
);

Some ways to track changes to the manager_id in the employee table -

  1. A database trigger that writes to an audit table. Requires no additional application code, but the audit table must be kept in sync with the source table to be of value.
  2. An audit log file that saves only the changed values in a JSON string.
  3. Move manager_id into a separate table and add an entry for every change. The separate table would have an id, employee_id, manager_id, and perhaps a current flag. Retrieval would mean searching the whole table for the most current manager.
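A minimal sketch of option 1, assuming PostgreSQL 11+ (the audit table and trigger names are illustrative, not from the question):

```sql
-- Hypothetical audit table and trigger for option 1.
CREATE TABLE employee_audit (
    id             SERIAL PRIMARY KEY,
    employee_id    INT4,
    old_manager_id INT4,
    new_manager_id INT4,
    changed_at     TIMESTAMP DEFAULT NOW(),
    changed_by     INT4
);

CREATE OR REPLACE FUNCTION log_manager_change() RETURNS trigger AS $$
BEGIN
    -- IS DISTINCT FROM also catches NULL <-> non-NULL changes
    IF NEW.manager_id IS DISTINCT FROM OLD.manager_id THEN
        INSERT INTO employee_audit (employee_id, old_manager_id, new_manager_id, changed_by)
        VALUES (OLD.id, OLD.manager_id, NEW.manager_id, NEW.modified_by);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employee_manager_audit
AFTER UPDATE ON employee
FOR EACH ROW EXECUTE FUNCTION log_manager_change();
```

With this shape, the employee table keeps a single current record and the history lives entirely in the audit table.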

I can't use an ORM, although that is what I would really like to do.

My question:

Is it better to keep a distinct, single record of the current data or sift through a table with history of the data's state each time?

I want to fetch data from 3 tables with a MySQL query to display in a feed, but I'm stuck. My query is:

SELECT f.leaderId, f.followerId, c.ccId, c.ccTitle, c.ccCity, c.ccCountry, c.ccFlag, c.uId, c.ccStartingDate, c.ccCreatedAt, l.uId, l.ccId
FROM followingrelations f, conferencecreate c, likesbyuser l
WHERE f.followerId = 'login_user_id'
      AND c.uId = f.leaderId
      AND c.ccFlag = 1
      AND c.ccStartingDate >= 'current_time_and_date_now' 
      OR l.uId = f.leaderId
GROUP BY c.ccTitle 
ORDER BY c.ccId DESC;

Expectation:

  1. I want to fetch data where uId and leaderId match, from the conferencecreate and followingrelations tables respectively.

  2. Besides the above condition, I also want to fetch data if any user likes any post. The condition I put for this is OR l.uId = f.leaderId.

Am I on the right track?

I implemented this query but nothing is displayed. Any idea how to solve this issue, please?
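One likely culprit is operator precedence: AND binds tighter than OR, so the un-parenthesized OR effectively discards the earlier filters. A sketch of the same intent with explicit joins and parentheses (table and column names taken from the question; the exact "liked posts" logic is my guess):

```sql
-- Explicit joins; the OR is parenthesized so the follower/flag
-- filters always apply. likesbyuser is LEFT JOINed so rows
-- without likes are not lost.
SELECT f.leaderId, f.followerId,
       c.ccId, c.ccTitle, c.ccCity, c.ccCountry, c.ccFlag,
       c.uId, c.ccStartingDate, c.ccCreatedAt
FROM followingrelations f
JOIN conferencecreate c
  ON c.uId = f.leaderId
LEFT JOIN likesbyuser l
  ON l.uId = f.leaderId AND l.ccId = c.ccId
WHERE f.followerId = 'login_user_id'
  AND c.ccFlag = 1
  AND (c.ccStartingDate >= NOW() OR l.uId IS NOT NULL)
ORDER BY c.ccId DESC;
```

If duplicates appear (one per like), SELECT DISTINCT or grouping on c.ccId would be the usual fix rather than GROUP BY c.ccTitle.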

My database design is as follows:

[database schema image]

I have a Windows 10 box with SSMS 17. I had SSMS 2016 installed, but it is now uninstalled. When I go to install the SqlServer module, I get this warning:

C:\WINDOWS\system32> install-module sqlserver -Scope AllUsers

Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its InstallationPolicy
value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from 'PSGallery'?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "N"): y
PackageManagement\Install-Package : The following commands are already available on this system:'Add-RoleMember,Add-SqlAvailabilityDatabase,Add-SqlAvailabilityGroupListenerStaticIp,Add-SqlAzu
reAuthenticationContext,Add-SqlColumnEncryptionKeyValue,Add-SqlFirewallRule,Add-SqlLogin,Backup-ASDatabase,Backup-SqlDatabase,Complete-SqlColumnMasterKeyRotation,ConvertFrom-EncodedSqlName,Co
nvertTo-EncodedSqlName,Convert-UrnToPath,Disable-SqlAlwaysOn,Enable-SqlAlwaysOn,Get-SqlAgent,Get-SqlAgentJob,Get-SqlAgentJobHistory,Get-SqlAgentJobSchedule,Get-SqlAgentJobStep,Get-SqlAgentSch
edule,Get-SqlColumnEncryptionKey,Get-SqlColumnMasterKey,Get-SqlCredential,Get-SqlDatabase,Get-SqlErrorLog,Get-SqlInstance,Get-SqlLogin,Get-SqlSmartAdmin,Grant-SqlAvailabilityGroupCreateAnyDat
abase,Invoke-ASCmd,Invoke-PolicyEvaluation,Invoke-ProcessASDatabase,Invoke-ProcessCube,Invoke-ProcessDimension,Invoke-ProcessPartition,Invoke-ProcessTable,Invoke-Sqlcmd,Invoke-SqlColumnMaster
KeyRotation,Join-SqlAvailabilityGroup,Merge-Partition,New-RestoreFolder,New-RestoreLocation,New-SqlAvailabilityGroup,New-SqlAvailabilityGroupListener,New-SqlAvailabilityReplica,New-SqlAzureKe
yVaultColumnMasterKeySettings,New-SqlBackupEncryptionOption,New-SqlCertificateStoreColumnMasterKeySettings,New-SqlCngColumnMasterKeySettings,New-SqlColumnEncryptionKey,New-SqlColumnEncryption
KeyEncryptedValue,New-SqlColumnEncryptionSettings,New-SqlColumnMasterKey,New-SqlColumnMasterKeySettings,New-SqlCredential,New-SqlCspColumnMasterKeySettings,New-SqlHADREndpoint,Read-SqlTableDa
ta,Read-SqlViewData,Remove-RoleMember,Remove-SqlAvailabilityDatabase,Remove-SqlAvailabilityGroup,Remove-SqlAvailabilityReplica,Remove-SqlColumnEncryptionKey,Remove-SqlColumnEncryptionKeyValue
,Remove-SqlColumnMasterKey,Remove-SqlCredential,Remove-SqlFirewallRule,Remove-SqlLogin,Restore-ASDatabase,Restore-SqlDatabase,Resume-SqlAvailabilityDatabase,Revoke-SqlAvailabilityGroupCreateA
nyDatabase,Save-SqlMigrationReport,Set-SqlAuthenticationMode,Set-SqlAvailabilityGroup,Set-SqlAvailabilityGroupListener,Set-SqlAvailabilityReplica,Set-SqlAvailabilityReplicaRoleToSecondary,Set
-SqlColumnEncryption,Set-SqlCredential,Set-SqlErrorLog,Set-SqlHADREndpoint,Set-SqlNetworkConfiguration,Set-SqlSmartAdmin,Start-SqlInstance,Stop-SqlInstance,Suspend-SqlAvailabilityDatabase,Swi
tch-SqlAvailabilityGroup,Test-SqlAvailabilityGroup,Test-SqlAvailabilityReplica,Test-SqlDatabaseReplicaState,Test-SqlSmartAdmin,Write-SqlTableData'. This module 'SqlServer' may override the
existing commands. If you still want to install this module 'SqlServer', use -AllowClobber parameter.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:1809 char:21
+ ...          $null = PackageManagement\Install-Package @PSBoundParameters
+                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Microsoft.Power....InstallPackage:InstallPackage) [Install-Package], Exception
    + FullyQualifiedErrorId : CommandAlreadyAvailable,Validate-ModuleCommandAlreadyAvailable,Microsoft.PowerShell.PackageManagement.Cmdlets.InstallPackage

Is this normal, or did something not uninstall correctly? I'm wondering if this is causing some issues I'm having with Set-SqlColumnEncryption.

I had trouble with the initial install of 2017, but then I had to delete a folder that the SSMS 2016 uninstaller did not remove. I'm not sure what that folder was and can't find a link to the fix I originally found.

I don't appear to have an old version of SqlServer installed as two people have suggested.

Loading personal and system profiles took 19921ms.
C:\WINDOWS\system32> import-module sqlserver
import-module : The specified module 'sqlserver' was not loaded because no valid module file was found in any module directory.
At line:1 char:1
+ import-module sqlserver
+ ~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ResourceUnavailable: (sqlserver:String) [Import-Module], FileNotFoundException
    + FullyQualifiedErrorId : Modules_ModuleNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand

C:\WINDOWS\system32> Get-Module sqlserver -ListAvailable
C:\WINDOWS\system32>

I do however, have SQLPS modules.

C:\WINDOWS\system32> Get-Module sqlps -ListAvailable


    Directory: C:\Program Files (x86)\Microsoft SQL Server\130\Tools\PowerShell\Modules


ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   1.0        SQLPS                               {Backup-SqlDatabase, Save-SqlMigrationReport, Add-SqlAvailabilityDatabase, Add-SqlAvailabilityGroupListenerStaticIp...}


    Directory: C:\Program Files (x86)\Microsoft SQL Server\140\Tools\PowerShell\Modules


ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   14.0       SQLPS                               {Backup-SqlDatabase, Save-SqlMigrationReport, Add-SqlAvailabilityDatabase, Add-SqlAvailabilityGroupListenerStaticIp...}


C:\WINDOWS\system32>
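Judging by the output above, this may not be a broken uninstall at all: the command names in the error are the ones the SQLPS module (still present, per Get-Module above) already exports, and Install-Module refuses to shadow existing commands by default. The error text itself names the workaround; a re-run might look like this (untested sketch):

```
C:\WINDOWS\system32> Install-Module SqlServer -Scope AllUsers -AllowClobber
```

Since the original Install-Package failed, SqlServer was never actually installed, which would also explain why Import-Module and Get-Module find nothing.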

Here is a part of our Database:

Table: Users
ID            INT
First Name    VARCHAR
Last Name     VARCHAR
Email Address VARCHAR
...

Table: Contacts
ID            INT
User_id       INT (Foreign Key)
First Name    VARCHAR
Last Name     VARCHAR
Email Address VARCHAR (Unique)
Phone Number  VARCHAR
Address       VARCHAR
...

A user can have multiple contacts (One-to-many relationship).

It is possible that a contact of one user is also a contact of another user, and the second user may try to enter a different phone number/address (but the same email address). I cannot allow duplicate email addresses.

One possible solution is to split the Contacts table into two tables:

Table: Contacts
ID            INT
Email Address VARCHAR
...

Table: Contact_attributes
ID            INT
User_id       INT (Foreign Key)
Contact_id    INT (Foreign Key)
First Name    VARCHAR
Last Name     VARCHAR

Is this the best way to reorganize my DB? Do you have other solutions?
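The proposed split, with the constraints spelled out (my column types and the UNIQUE choices are assumptions; the junction-style Contact_attributes table carries each user's private view of a shared contact):

```sql
-- One row per unique email address, shared across users.
CREATE TABLE Contacts (
    ID           INT PRIMARY KEY,
    EmailAddress VARCHAR(255) NOT NULL UNIQUE
);

-- Per-user details: two users can point at the same Contacts row
-- but keep their own name/phone/address for it.
CREATE TABLE Contact_attributes (
    ID          INT PRIMARY KEY,
    User_id     INT NOT NULL REFERENCES Users(ID),
    Contact_id  INT NOT NULL REFERENCES Contacts(ID),
    FirstName   VARCHAR(100),
    LastName    VARCHAR(100),
    PhoneNumber VARCHAR(50),
    Address     VARCHAR(255),
    UNIQUE (User_id, Contact_id)   -- each user lists a contact at most once
);
```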

We have a SQL Server 2014 AlwaysOn availability group set up for a database, using synchronous-commit availability mode. We ran a few PAL (Performance Analysis of Logs) reports to obtain performance data on the primary replica, and they showed IO alerts where reads and writes were >25 ms. The IO alerts occurred for both logical and physical IO, on drives C:, F: (which holds the db files) and L: (which holds the transaction logs). I ran this statement:

select wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / waiting_tasks_count as time_per_wait
from sys.dm_os_wait_stats
where waiting_tasks_count > 0
  and wait_type = 'HADR_SYNC_COMMIT';

and got 12 ms for the time_per_wait value. This means the commit latency between the primary and secondary replicas was 12 ms.

My question: does the IO response time reported by PAL include this 12ms latency between the primary and the secondary replicas?

Thanks

I have two tables, Lpn and Uld, that relate on one column: UldNo.

To give some background on the issue

An Lpn is a label that is assigned to one or more pieces of freight in a warehouse. A Uld is a pallet that the freight is built on so that it can go on an airplane. Every Uld has a unique identifier on it, consisting of a 3 letter type, 5 digit serial number, and 2 letter suffix that indicates the owner of the pallet. A Uld can come through a warehouse multiple times, and each time would have different Lpns (freight) on it.

To ask about the database

An Lpn row will only relate to one Uld row, and a Uld row may relate to multiple Lpn rows. The issue is that UldNo cannot be the sole PRIMARY KEY (PK) for Uld, because a Uld may be reused later (with different Lpns), so I created a surrogate PK of ID. If I try to create the relationship, I get an error because UldNo is not a PK. From everything I've learned, composite keys should be avoided. Is it correct to use a composite key in this situation? And if so, do I need to create the ID column in the Lpn table to create the relationship?

The DBMS I'm using is SQL Server 2016 Express.
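One common layout for this, sketched under my own naming assumptions (surrogate key on Uld, a plain foreign key from Lpn, and no composite key anywhere):

```sql
-- Each Uld row is one *visit* of a pallet through the warehouse,
-- so UldNo alone is deliberately not unique.
CREATE TABLE Uld (
    ID        INT IDENTITY PRIMARY KEY,
    UldNo     CHAR(10) NOT NULL,     -- 3-letter type + 5-digit serial + 2-letter owner
    ArrivedAt DATETIME2 NOT NULL     -- hypothetical column distinguishing visits
);

-- An Lpn points at exactly one Uld visit via the surrogate key.
CREATE TABLE Lpn (
    ID    INT IDENTITY PRIMARY KEY,
    UldID INT NOT NULL REFERENCES Uld(ID)
);
```

Under this shape the Lpn table needs a UldID column (referencing Uld.ID), not a copy of UldNo, and the foreign-key error goes away because the reference targets the PK.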

I have a server which runs third party software (called Visualcut) to email Crystal Reports to our users.

When I log onto the Visualcut server, I can open the software and preview a report. This asks me to enter a username and password (for a Microsoft SQL account), and the report then displays with no errors. However, whenever I do this, an error is logged in the SQL Server error log as follows:

10/26/2015 08:15:39 Logon Unknown Login failed for user ''. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: 10.10.10.26]

10/26/2015 08:15:39 Logon Unknown Error: 18456 Severity: 14 State: 58.

I don't understand why the report seemingly authenticates without any issues when the log file states otherwise.

I would like to add that our SQL Server is configured to use mixed authentication.

PostgreSQL 9.5.6

I have two tables - t1, t2.

t1 has a title column with titles separated by spaces.

t1.title

t2 has a path column with paths separated by dashes and prefixed 'path/'

t2.path

GOAL: for each t1.title, count the number of times it shows up in t2.

Course of action: parse t2.path so that it looks like t1.title, in order to do a left join and count.

I'm very new to this; if you have a better approach altogether, I would appreciate any suggestions.

So far I have:

select t1.title, count(t2.path) as num
from t1 left join t2 on t1.title = substring(t2.path from 6)
where t1.title like '%'||split_part(substring(t2.path from 6),'-',1)||'%'
group by t1.title;

I am trying to produce the following:

Product Name | Color | 40 | 41 | 42 | 43 | 44 |
-----------: | ------|---:|--: |--: |---:|---:|
 COOL99-3    | Black | 70 |  1 | 71 | 16 | 30 |
 COOL99-3    | Brown | 20 | 50 | 80 | 20 | 15 |
 COOL99-12   | Black | 23 | 12 | 19 | 25 | 45 |
 COOL99-12   | Brown | 09 | 16 | 07 | 34 | 55 |
 COOL99-81   | Black | 14 | 13 | 23 | 06 | 13 |
 COOL99-81   | Brown | 18 | 19 | 48 | 20 | 15 |

I have these tables:

CREATE TABLE [dbo].[Product](
    [PID] [int] NOT NULL,
    [ProductCode] [nchar](30) NOT NULL,
    [ProductName] [nchar](200) NOT NULL,
    [Description] [nvarchar](max) NULL,
    [CostPrice] [decimal](18, 2) NOT NULL,
    [SellingPrice] [decimal](18, 2) NOT NULL,
    [Discount] [decimal](18, 2) NOT NULL,
    [VAT] [decimal](18, 2) NOT NULL,
    [Barcode] [nchar](30) NULL,
    [ReorderPoint] [int] NOT NULL,
    [OpeningStock] [int] NOT NULL,
    [Color] [nchar](150) NULL,
    [CategoryName] [nchar](150) NULL,
    [SizeID] [int] NULL,
 CONSTRAINT [PK_Product] PRIMARY KEY CLUSTERED 
)

CREATE TABLE [dbo].[Invoice_Product](
    [IPo_ID] [int] IDENTITY(1,1) NOT NULL,
    [InvoiceID] [int] NOT NULL,
    [ProductID] [int] NOT NULL,
    [CostPrice] [decimal](18, 2) NOT NULL,
    [SellingPrice] [decimal](18, 2) NOT NULL,
    [Margin] [decimal](18, 2) NOT NULL,
    [Qty] [int] NOT NULL,
    [Amount] [decimal](18, 2) NOT NULL,
    [DiscountPer] [decimal](18, 2) NOT NULL,
    [Discount] [decimal](18, 2) NOT NULL,
    [VATPer] [decimal](18, 2) NOT NULL,
    [VAT] [decimal](18, 2) NOT NULL,
    [TotalAmount] [decimal](18, 2) NOT NULL,
    [Barcode] [nchar](30) NULL,
 CONSTRAINT [PK_Invoice_Product] PRIMARY KEY CLUSTERED 
)


CREATE TABLE [dbo].[InvoiceInfo](
    [Inv_ID] [int] NOT NULL,
    [InvoiceNo] [nchar](30) NOT NULL,
    [InvoiceDate] [datetime] NOT NULL,
    [CustomerID] [int] NOT NULL,
    [SalesmanID] [int] NULL,
    [GrandTotal] [decimal](18, 2) NOT NULL,
    [TotalPaid] [decimal](18, 2) NOT NULL,
    [Balance] [decimal](18, 2) NOT NULL,
    [Remarks] [nvarchar](max) NULL,
 CONSTRAINT [PK_InvoiceInfo] PRIMARY KEY CLUSTERED 
)


CREATE TABLE [dbo].[Customer](
    [ID] [int] NOT NULL,
    [CustomerID] [nchar](30) NULL,
    [Name] [nchar](200) NULL,
    [Gender] [nchar](10) NULL,
    [Address] [nvarchar](250) NULL,
    [City] [nchar](200) NULL,
    [State] [nchar](150) NULL,
    [ZipCode] [nchar](15) NULL,
    [ContactNo] [nchar](150) NULL,
    [EmailID] [nchar](200) NULL,
    [Remarks] [nvarchar](max) NULL,
    [Photo] [image] NULL,
    [CustomerType] [nchar](30) NULL,
 CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED 
)

With this query I can't group by:

declare @d1 as varchar(50)='PB-0005';
WITH Sales as(
SELECT      
                      Product.ProductName, Product.CategoryName, tblSize.Size, Product.Color, Invoice_Product.Qty
FROM         Customer INNER JOIN
                      InvoiceInfo ON Customer.ID = InvoiceInfo.CustomerID INNER JOIN
                      Invoice_Product ON InvoiceInfo.Inv_ID = Invoice_Product.InvoiceID INNER JOIN
                      Product ON Invoice_Product.ProductID = Product.PID INNER JOIN
                      tblSize ON Product.SizeID = tblSize.SizeID
WHERE     (InvoiceInfo.InvoiceNo = @d1)  )
SELECT * 
FROM
   Sales  
   PIVOT (sum(Qty) FOR Size IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[7C],[8C],[9C],[10C],[11C],[12C],[13C],[40],[41],[42],[43],[44],[45]))AS P GROUP BY ProductName 
;
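In T-SQL, PIVOT performs its own implicit GROUP BY over every source column that is neither the aggregate nor the spreading column, so the trailing GROUP BY is both unnecessary and a syntax error there. A sketch of the fix (trimmed to a few size columns; any extra column left in the CTE, such as CategoryName, would split the rows further):

```sql
DECLARE @d1 varchar(50) = 'PB-0005';

WITH Sales AS (
    -- Keep only the columns that should shape the output rows,
    -- plus the spreading column (Size) and the measure (Qty).
    SELECT p.ProductName, p.Color, s.Size, ip.Qty
    FROM InvoiceInfo i
    JOIN Invoice_Product ip ON i.Inv_ID = ip.InvoiceID
    JOIN Product p          ON ip.ProductID = p.PID
    JOIN tblSize s          ON p.SizeID = s.SizeID
    WHERE i.InvoiceNo = @d1
)
SELECT ProductName, Color, [40], [41], [42], [43], [44]
FROM Sales
PIVOT (SUM(Qty) FOR Size IN ([40], [41], [42], [43], [44])) AS P;
```

PIVOT then groups by ProductName and Color automatically, which is exactly the one-row-per-product-and-color shape in the desired output.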

My environment: DB2 10.1.4 Enterprise, IBM Security Identity Manager 6.0 FP6, AIX 7.1. Each archive log has a size of 40 MB.

Yesterday we did maintenance on the ISIM database, starting with an online backup. The backup size is around 7 GB. After that, we pruned the ISIM database using db2 prune history 20151001 and delete.

Then we ran runstats and reorg on all the tables in the ISIM database (no automatic maintenance is set on this database, and runstats/reorg had not been run for more than 6 months), using Data Studio 4.1.0.

After we reorged the database, we ran runstats again and noticed an increased amount of space taken up by archive logs (around 3 GB of archive logs generated during runstats). We thought something might be hung inside a db2 process, so we restarted db2 using db2stop / db2start.

After restarting DB2 we tested the system and everything looked fine, with around 10 GB available after the restart that night. However, in the morning we found that all the space had been used up by archive logs.

We allocated another 10 GB to the disk in the morning, and it was exhausted faster than we have ever seen before. Normally, when ISIM reconciles one system we don't even notice the change in archive log size, even though it reconciles every hour. After last night's maintenance, the same reconcile took up more than 50 GB of space even though it ran only once, and it slowed down the ISIM reconcile process as well; all of that space is occupied by archive logs, which seem to generate a new file every second during the reconcile.

Now our HADR standby can't catch up with this volume of archive logs, and all connections are disconnected. We decided to fix the problem on the primary DB first, but using db2pd to check for any active runstats or reorg didn't give any result. Restoring the database is not an option, as it is a production database. We are considering an offline runstats/reorg tonight, but we aren't sure whether it will solve the problem.

Does anyone have any suggestions?

I have the following task: I want to execute a fixed SELECT statement against a SQL Server table with some fixed condition, and I need to do it multiple times at a specified interval. Let's say N times with a 1-minute interval between executions, with the final goal of displaying only records that were returned all N times. Ultimately I want to check whether specific records persist in the SELECT results over time, or perhaps even order them by how long they kept being returned: i.e., one record was returned in the first 3 executions and the interval was set to 1 minute, so I can say it was persistent for 3 minutes; another record was returned 5 times, so it persisted / met my condition for 5 minutes.

The nature of my condition/data dictates that records should not meet this condition for more than one minute, and I want to catch abnormal records that meet it for a prolonged period.

From the explanation above it seems this all involves some loop, a temp table, and a condition to put results into yet another temp table. But I'm posting here to check whether I'm missing some simpler approach that would achieve the same result.
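For reference, the loop-and-temp-table idea described above might be sketched like this in T-SQL (entirely hypothetical names; the real SELECT, key column, and condition would replace mine):

```sql
-- Accumulate, per record key, how many of the N runs returned it.
CREATE TABLE #seen (record_id INT PRIMARY KEY, times_seen INT NOT NULL);

DECLARE @i INT = 0, @n INT = 5;
WHILE @i < @n
BEGIN
    MERGE #seen AS s
    USING (SELECT id FROM dbo.MyTable WHERE /* fixed condition */ 1 = 1) AS cur
        ON s.record_id = cur.id
    WHEN MATCHED THEN UPDATE SET times_seen = s.times_seen + 1
    WHEN NOT MATCHED THEN INSERT (record_id, times_seen) VALUES (cur.id, 1);

    SET @i += 1;
    IF @i < @n WAITFOR DELAY '00:01:00';   -- 1-minute interval
END;

-- Records returned by every run, i.e. persistent for all N minutes:
SELECT record_id, times_seen
FROM #seen
WHERE times_seen = @n;
```

Dropping the final WHERE and ordering by times_seen DESC would instead rank records by how many runs they persisted through.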

Say I have a local database named MyDatabase. I want to move it to Azure SQL and replace an existing database there, also named MyDatabase. I know how to create a BACPAC file locally, and I know how to upload the BACPAC to my Azure storage account. However, once the BACPAC is in Azure storage, I don't know the preferred way of overwriting the existing MyDatabase with the copy in storage. I could import the BACPAC as a second database, delete the first, and rename the one just imported. However, is that the best or preferred way of doing this?

First off, I'm pretty sure I've found every related post on the entire internet, so please read carefully before marking as a duplicate.

I have an ASP.NET site, hosted with IIS, using SQL Server Express. It's set to use Windows Authentication, and I'd prefer it to stay that way (although, at this point, I'm not picky). All three programs/services are on the same machine.

There are two databases: one for site logins, another for site data. The logins one works fine, yet both are set up identically.

The user in question has db_owner membership under the database in question. This is the Windows user that corresponds to the ApplicationPoolIdentity option in IIS.

Error:

Cannot open database "EDB-Content" requested by the login. The login failed. Login failed for user 'IIS APPPOOL\EDB'.

Connection strings:

name="LoginDB" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=aspnetdb;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient"

name="ExcDbContext" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=EDB-Content;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient"

Stack trace:

https://pastebin.com/embed_iframe/RQN3uwUM


I added the login/user via SSMS, which had no problems with the space. I don't even know how I'd add quotes/brackets.

I have checked the SQL Server error log before, but this time I dug deeper. I restarted SQL Server and checked to see if everything is starting up correctly. It seems the EDB-Content database never gets started:

log extract

Any idea what's going wrong there?

In sys.databases the state_desc of that database is ONLINE.

The article Allowing IIS 7.5 Applications to communicate with SQL server via Windows Authentication by Giles Middleton describes exactly what I've already done.

IIS APPPOOL\EDB is the Windows user.


Non-exhaustive list of things I've tried: (will update as I recall/try things)

  • Confirmed that the login can connect to the server (GRANT CONNECT SQL TO <login>)
  • Added and removed the login from database and global levels
  • Restarted just about everything that can be restarted (including
  • Wrapped the DB name in quotes in connection string (&quot;)
  • See update below
  • Confirmed that "LoginDB" (aspnetdb.mdf) is indeed being read/written to

Update

I've narrowed down the problem, but not why it's a problem. If I detach and delete the database and change the Initial Catalog and AttachDbFilename values in the connection string, a new database is generated next time I run the website that works fine, even down to the database file being in App_Data as it's supposed to. However, oddly, it is not attached to SQL Server. If I attach it myself, it does so without complaint, but subsequent runs of the website generate the above error. If I detach the DB, the website still throws an exception. The only fix I've found is to create a whole new database by changing Initial Catalog and AttachDbFilename.

At this point my connection strings are as follows:

name="LoginDB" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=aspnetdb;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient"

name="ExcDbContext" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=contentdb;AttachDbFilename=|DataDirectory|\contentdb.mdf;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient"
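For reference, this is roughly the T-SQL equivalent of what I did in SSMS to create the login and the db_owner user (names per my setup; the exact statements SSMS ran may differ slightly):

```sql
-- Roughly what SSMS did for me when I added the login/user.
CREATE LOGIN [IIS APPPOOL\EDB] FROM WINDOWS;
GO
USE [EDB-Content];
GO
CREATE USER [IIS APPPOOL\EDB] FOR LOGIN [IIS APPPOOL\EDB];
ALTER ROLE [db_owner] ADD MEMBER [IIS APPPOOL\EDB];
```

SSMS accepted all of this without complaint, including the space in the name.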

We have one user who uses SSMS to connect to a 3-node, 2-instance failover cluster. He also uses SSIS, Excel, SharePoint, SSAS, etc. to connect to this database instance, which I'll call TEST. There are a couple hundred other users of this instance.

His domain account gets locked out about 1-2 days after we have an instance failover. This has been happening for about a year. We generally fail over about once a month, but sometimes not for a couple of months. He gets his password reset and everything is fine until the next failover.

I'm assuming he entered his password somewhere and it has since expired; he changed it but never updated the spot where he saved it. I can't find that spot for the life of me, and the user is getting upset because I keep asking the same question.

We have verified using our syslog server that the lockout is coming from that SQL node that runs the TEST instance. It changes as the instance floats around the cluster so we are sure it is something in the TEST instance. The database engine and SSAS run as part of the TEST cluster service. SSISDB is fairly heavily used on that instance.

I checked all the agent job owners; he doesn't own any of those. I checked the SSISDB parameters list but didn't find his account in there. I'm not really sure whether the constrained delegation we have set up for Reporting Services could cause the lockout to come from the database server, but maybe...

Any other spots I could check for saved credentials in SQL?
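In case it helps, these are the kinds of queries I've been using to hunt for his account so far (sketches; `'%username%'` is a placeholder for his actual account name, and I'm not certain I've covered every credential store):

```sql
-- Agent job owners (already checked, nothing found).
SELECT j.name, SUSER_SNAME(j.owner_sid) AS owner_name
FROM msdb.dbo.sysjobs j
WHERE SUSER_SNAME(j.owner_sid) LIKE '%username%';

-- SSISDB project/package parameter values mentioning the account.
SELECT project_id, object_name, parameter_name
FROM SSISDB.catalog.object_parameters
WHERE CAST(default_value AS nvarchar(4000)) LIKE '%username%';

-- Server-level credentials and Agent proxies.
SELECT name, credential_identity FROM sys.credentials;
SELECT p.name, c.credential_identity
FROM msdb.dbo.sysproxies p
JOIN sys.credentials c ON p.credential_id = c.credential_id;
```

None of these turned up his account, which is why I'm asking where else credentials could be hiding.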

My production server has MariaDB 10.0 installed.

We are migrating the database into a new server which will have MariaDB 10.1 installed.

To minimize downtime, we have installed 10.1 on the new server straight away, so we don't have to do the upgrade right after moving the database from one server to the other.

Is it possible to backup the database using xtrabackup in the first server, and restore the backup in the newer server, even though the version of MariaDB is different?

Actually, I would like to find answers for the following situation: let's say I have a stored procedure (Usp1) which at some point calls another stored procedure (usp2). Each of the stored procedures is a transaction, so there are nested transactions.

  1. Can a deadlock occur in nested transactions while modifying the same table from two different SQL Server sessions?
  2. Are there any possibilities of conflicts leading to a deadlock between the inner and outer transactions of the same stored procedure in a single session?

Examples would be really appreciated.
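To make the setup concrete, here is a minimal sketch of the nesting I mean (table and column names are placeholders):

```sql
CREATE PROCEDURE usp2 AS
BEGIN
    BEGIN TRANSACTION;   -- "inner" transaction: in T-SQL this just increments @@TRANCOUNT
    UPDATE dbo.SomeTable SET col = col + 1 WHERE id = 1;
    COMMIT;              -- decrements @@TRANCOUNT; nothing is durable yet
END;
GO

CREATE PROCEDURE Usp1 AS
BEGIN
    BEGIN TRANSACTION;   -- outer transaction
    UPDATE dbo.SomeTable SET col = col + 1 WHERE id = 2;
    EXEC usp2;
    COMMIT;              -- only this final commit makes the work durable
END;
```

My questions above are about what can deadlock in this arrangement, both across sessions and within a single session.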

I have a PostgreSQL v8.4 database. I need to update several hundred tables with about 3 million rows of data each.

The customer wants a new column added (which I added) and now I'm trying to update the new column with data.

I created some SQL statements to do the update. This works if I run it manually in pgAdmin (just the SQL statements), so I tried putting it into a function to automate the UPDATE, like so:

CREATE OR REPLACE FUNCTION update_ee() RETURNS void AS
$$
DECLARE
  row RECORD;
BEGIN
   FOR row IN SELECT tablename FROM pg_tables WHERE schemaname = 'public' AND tablename LIKE '%_2015060%' ORDER BY tablename ASC
   LOOP
   RAISE LOG 'removing index for %', row.tablename;
   -- remove indices
   EXECUTE 'DROP INDEX ' || row.tablename || '_start_idx;';

   RAISE LOG 'starting update on %', row.tablename;
   -- update table
   EXECUTE 'UPDATE ' || row.tablename || ' set ee = array_to_string(regexp_matches(xml_data, ''<EE_ID>(.+?)</EE_ID>''), '';'') where elnot is NULL and xml_data is not null;';

   RAISE LOG 'restoring index on %', row.tablename;
   -- put indices back
   EXECUTE 'CREATE INDEX ' || row.tablename || '_start_idx on ' || row.tablename || ' USING btree(start);';
   END LOOP;
  RETURN;
 END;
$$
LANGUAGE plpgsql;

So the function runs; the updates tend to slow down a bit with each successive table, and the log statements all appear in the log file. It appears to be working, but upon finishing, not one table has been updated.

What did I do wrong?

Is it not possible to update tables from a function? Is the language wrong? Should I use SQL instead of plpgsql?

Thanks in advance...

I read that InnoDB is supposed to be better at writing to MySQL than MyISAM; however, trying it out by running 3000 MySQL queries (inserting rows, 8 columns + id), the results haven't turned out how I'd hoped.
With InnoDB the execution time is around 5.3 seconds, while MyISAM takes around 2.2 seconds.

My current mysql configuration is:

innodb_io_capacity = 8000
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_log_buffer_size = 32M
innodb_log_file_size = 564M
innodb_buffer_pool_size = 6G
innodb_buffer_pool_instances = 6
innodb_thread_concurrency = 0

Everything else is at its default value.
My server is currently running: Ubuntu 14.04 x64
Has 12GB RAM
6 Cores
And using an SSD (service provider says it provides 10,000 IOPS)

What would be the best configuration for InnoDB if I were to use only InnoDB, and what would be the best configuration for just MyISAM (e.g., would it be faster for some reason if I disabled InnoDB while not using it)?

Right now I'm thinking of using my server mainly for storing data, writing more than reading, while later on I expect to read more than write.
Not sure if it matters, but I'm currently running nginx (no Apache on the server), PHP 5.6, and MySQL 5.6.19.
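For context, the 3000 inserts are currently run as individual autocommitted statements. I've read (though not confirmed) that wrapping them in a single transaction, or relaxing the log flush setting, changes the picture for InnoDB; something like this (column names are placeholders for my real 8-column table):

```sql
-- One commit for the whole batch instead of 3000 separate log flushes.
START TRANSACTION;
INSERT INTO mytable (c1, c2, c3, c4, c5, c6, c7, c8)
VALUES ('a','b','c','d','e','f','g','h');
-- ... repeat for the remaining 2999 rows ...
COMMIT;

-- Or trade some durability for speed: flush the InnoDB log roughly
-- once per second instead of at every commit.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```

Is that the kind of change that would explain the gap I'm seeing, or is there a better configuration approach?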

I just recently started using MAMP, phpMyAdmin, and WordPress. When I try to restart the server in MAMP PRO by pressing the server icon in the top right corner, I get a pop-up that reads the error: "Starting MySQL failed. MySQL couldn't be started. Please check the log files for more information."

The log files read the following:

170525 19:13:52 mysqld_safe Logging to '/Applications/MAMP/logs/mysql_error.log'.
170525 19:13:53 mysqld_safe Starting mysqld daemon with databases from /Library/Application Support/appsolute/MAMP PRO/db/mysql56
2017-05-25 19:13:53 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-05-25 19:13:53 0 [Note] --secure-file-priv is set to NULL. Operations related to importing and exporting data are disabled
2017-05-25 19:13:53 0 [Note] /Applications/MAMP/Library/bin/mysqld (mysqld 5.6.35) starting as process 9486 ...
2017-05-25 19:13:53 9486 [Warning] Setting lower_case_table_names=2 because file system for /Library/Application Support/appsolute/MAMP PRO/db/mysql56/ is case insensitive
2017-05-25 19:13:53 9486 [Note] Plugin 'FEDERATED' is disabled.
2017-05-25 19:13:53 9486 [Note] InnoDB: Using atomics to ref count buffer pool pages
2017-05-25 19:13:53 9486 [Note] InnoDB: The InnoDB memory heap is disabled
2017-05-25 19:13:53 9486 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-05-25 19:13:53 9486 [Note] InnoDB: Memory barrier is not used
2017-05-25 19:13:53 9486 [Note] InnoDB: Compressed tables use zlib 1.2.8
2017-05-25 19:13:53 9486 [Note] InnoDB: Using CPU crc32 instructions
2017-05-25 19:13:53 9486 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2017-05-25 19:13:53 9486 [Note] InnoDB: Completed initialization of buffer pool
2017-05-25 19:13:53 9486 [Note] InnoDB: Highest supported file format is Barracuda.
2017-05-25 19:13:53 9486 [Note] InnoDB: The log sequence numbers 0 and 0 in ibdata files do not match the log sequence number 11639103 in the ib_logfiles!
2017-05-25 19:13:53 9486 [Note] InnoDB: Database was not shutdown normally!
2017-05-25 19:13:53 9486 [Note] InnoDB: Starting crash recovery.
2017-05-25 19:13:53 9486 [Note] InnoDB: Reading tablespace information from the .ibd files...
2017-05-25 19:13:53 9486 [ERROR] InnoDB: Attempted to open a previously opened tablespace. Previous tablespace mysql/innodb_index_stats uses space ID: 2 at filepath: ./mysql/innodb_index_stats.ibd. Cannot open tablespace wp_demo/wp_usermeta which uses space ID: 2 at filepath: ./wp_demo/wp_usermeta.ibd
2017-05-25 19:13:53 7fff9ac213c0  InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
InnoDB: Error: could not open single-table tablespace file ./wp_demo/wp_usermeta.ibd
InnoDB: We do not continue the crash recovery, because the table may become
InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it.
InnoDB: To fix the problem and start mysqld:
InnoDB: 1) If there is a permission problem in the file and mysqld cannot
InnoDB: open the file, you should modify the permissions.
InnoDB: 2) If the table is not needed, or you can restore it from a backup,
InnoDB: then you can remove the .ibd file, and InnoDB will do a normal
InnoDB: crash recovery and ignore that table.
InnoDB: 3) If the file system or the disk is broken, and you cannot remove
InnoDB: the .ibd file, you can set innodb_force_recovery > 0 in my.cnf
InnoDB: and force InnoDB to continue crash recovery here.
170525 19:13:53 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended
[The same startup sequence, crash recovery attempt, and "Attempted to open a previously opened tablespace" error repeat verbatim at 19:16:02 and again at 19:17:53; the pasted log cuts off partway through the third attempt.]

When I use mysqldump to export a MySQL database, it always produces a dump.sql containing

...some other things...
/*!40101 SET character_set_client = utf8*/;
...some other things...

This is the mysqldump command I use:

mysqldump -u root -p databaseName -R -E --single-transaction --default-character-set=utf8mb4 > dump.sql

The charset of the MySQL database is utf8mb4 rather than utf8; the character-set-related variables are:

+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8mb4                    |
| character_set_connection | utf8mb4                    |
| character_set_database   | utf8mb4                    |
| character_set_filesystem | binary                     |
| character_set_results    | utf8mb4                    |
| character_set_server     | utf8mb4                    |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
| collation_connection     | utf8mb4_unicode_ci         |
| collation_database       | utf8mb4_unicode_ci         |
| collation_server         | utf8mb4_unicode_ci         |
+--------------------------+----------------------------+

Why does mysqldump always add /*!40101 SET character_set_client = utf8*/ rather than /*!40101 SET character_set_client = utf8mb4*/?

What happens if /*!40101 SET character_set_client = utf8*/ is used?

Can we have mysqldump use /*!40101 SET character_set_client = utf8mb4*/?

I've got a query that runs in about 1.6 seconds and returns roughly 150K rows. I've been building a MEMORY-engine temp table from this query, and that operation takes about 6.5 seconds. I've sifted through lots of documentation on the MySQL site and futzed with indexes on the temp table (hardly any difference at all with or without them), trying to find some way to make the table creation perform better. Does anybody have any suggestions for config options that might speed up creating an in-memory temp table (or non-temp table)?

The create syntax I'm using is below...

CREATE TEMPORARY TABLE
IF NOT EXISTS myTempTable
    (ugID INT, stID INT, INDEX st (`stID`))
ENGINE=MEMORY AS
SELECT
    ugID,
    stID
FROM
    a_complex_query_that_runs_in_under_2_seconds_but_takes_6_to_write_to_a_temptable_with_150k_rows

Here is the EXPLAIN for the query that feeds the temp table:

id  select_type table   type    possible_keys   key key_len ref rows    Extra
1   PRIMARY <derived15> ALL NULL    NULL    NULL    NULL    559 
1   PRIMARY creatorEntity   eq_ref  PRIMARY PRIMARY 4   creatorAssignment.OwnerObjectID 1   Using index
1   PRIMARY <derived2>  ALL NULL    NULL    NULL    NULL    152580  Using where; Using join buffer
15  DERIVED VMAssignment    ref AssociatedObjectID,AssociatedObject AssociatedObject    2       2902    Using where
2   DERIVED test_   range   AssignmentValue,AssociatedObjectID,AssociatedObject,assignmentQuery AssociatedObjectID  4   NULL    214 Using where; Using temporary; Using filesort
2   DERIVED cfv_    ref CustomFieldID,EntityID,CFEntity,CFValue,CFEntityValue   CustomFieldID   4   testCloud.test_.AssignmentValue 4232    Using where
2   DERIVED ent_    ref VMStoreID   VMStoreID   4   testCloud.cfv_.EntityID 1   Using index
3   UNION   test_   ref AssociatedObjectID,AssociatedObject AssociatedObject    2       2902    Using where; Using temporary; Using filesort
3   UNION   cfv_    ref CustomFieldID,EntityID,CFEntity,CFValue,CFEntityValue   CFValue 772 func,func   4   Using where
3   UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.cfv_.EntityID 1   Using index
4   UNION   test_   const   PRIMARY PRIMARY 4       1   Using index
5   UNION   test_   index_merge AssociatedObjectID,AssociatedObject AssociatedObjectID,AssociatedObject 4,6 NULL    1   Using intersect(AssociatedObjectID,AssociatedObject); Using where; Using index; Using temporary; Using filesort
5   UNION   entity_ ref PRIMARY,ClassName   ClassName   2       23326   Using where
5   UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.entity_.ID    1   Using index
6   UNION   entity_ ref PRIMARY,ClassName   ClassName   2       23326   Using where; Using temporary; Using filesort
6   UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.entity_.ID    1   Using index
6   UNION   test_   ref AssignmentValue,AssociatedObjectID,AssociatedObject,assignmentQuery AssignmentValue 768 testCloud.entity_.State 1   Using where
7   UNION   entity_ ref PRIMARY,ClassName   ClassName   2       23326   Using where; Using temporary; Using filesort
7   UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.entity_.ID    1   Using index
7   UNION   test_   ref AssignmentValue,AssociatedObjectID,AssociatedObject,assignmentQuery AssignmentValue 768 testCloud.entity_.Zip   1   Using where
8   UNION   test_   range   AssociatedObjectID,AssociatedObject AssociatedObjectID  4   NULL    150 Using where; Using temporary; Using filesort
8   UNION   entity_ ref PRIMARY,ClassName,Address   Address 456 func,func   4   Using where
8   UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.entity_.ID    1   Using index
9   UNION   test_   range   AssignmentValue,AssociatedObjectID,AssociatedObject,assignmentQuery AssociatedObjectID  4   NULL    203 Using where; Using temporary; Using filesort
9   UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.test_.AssignmentValue 1   Using where; Using index
10  UNION   test_   const   PRIMARY PRIMARY 4       1   Using index
11  UNION   test_   ref AssociatedObjectID,AssociatedObject AssociatedObjectID  4       1   Using where; Using temporary; Using filesort
11  UNION   entity_ ref PRIMARY,ClassName   ClassName   2       23326   Using where
11  UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.entity_.ID    1   Using index
12  UNION   test_   ref AssignmentValue,AssociatedObjectID,AssociatedObject,assignmentQuery AssociatedObject    2       2902    Using where; Using temporary; Using filesort
12  UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.test_.AssignmentValue 1   Using where; Using index
13  UNION   test_   range   AssignmentValue,AssociatedObjectID,AssociatedObject,assignmentQuery AssociatedObjectID  4   NULL    239 Using where; Using temporary; Using filesort
13  UNION   list_stores ref VMListID,VMStoreID,VMStoreID_2,VMStoreID_3,VMStoreID_4,VMStoreID_5,VMStoreID_6,VMStoreID_7,VMStoreID_8,VMStoreID_9,VMStoreID_10,VMStoreID_11,VMStoreID_12,VMStoreID_13,VMStoreID_14,VMStoreID_15,VMStoreID_16   VMListID    4   testCloud.test_.AssignmentValue 318 Using where
13  UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.list_stores.VMStoreID 1   Using index
14  UNION   test_   range   AssignmentValue,AssociatedObjectID,AssociatedObject,assignmentQuery AssociatedObjectID  4   NULL    70  Using where; Using temporary; Using filesort
14  UNION   sto_    ref PRIMARY,RetailerID  RetailerID  4   testCloud.test_.AssignmentValue 63  Using where; Using index
14  UNION   ent_    ref VMStoreID   VMStoreID   4   testCloud.sto_.ID   1   Using index
NULL    UNION RESULT    <union2,3,4,5,6,7,8,9,10,11,12,13,14>   ALL NULL    NULL    NULL    NULL    NULL    

I recently came across the article Locking and Performance. It has the following statements:

If we manually monitor locking activity while this query executes, we see that without the page locking granularity hint, SQL Server acquires and releases over half a million row level locks while seeking the clustered index.

The performance of this query can be improved to 320 ms simply by adding a page granularity locking hint to the Sales table

I am using SQL Server 2012. For identifying the locks, I referred to MSDN: How to: Find the Objects That Have the Most Locks Taken on Them, and I used a workload query that joins two tables and selects 1,001,000 rows. Still, it is not showing any locks in the output. It is hard to believe that no locking is happening.

  1. Why is it not showing any locks? How can I correct this?
  2. Is there a better way to identify the type and count of locks?

References - Updated:

  1. Which Locks Count Toward Lock Escalation? - Kendra Little

  2. Use Extended Events to capture Deadlocks in SQL Server

  3. Tracking SQL Server Database Usage - Jonathan Kehayias

Extended Event with my Workload

IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='LockCounts')
DROP EVENT session LockCounts ON SERVER
GO
DECLARE @dbid int

SELECT @dbid = db_id('My_DW')

DECLARE @sql nvarchar(1024)
SET @sql = '
CREATE event session LockCounts ON SERVER
ADD EVENT sqlserver.lock_acquired (WHERE database_id =' + CAST(@dbid AS nvarchar) +')
ADD TARGET package0.synchronous_bucketizer ( 
SET filtering_event_name=''sqlserver.lock_acquired'', source_type=0, source=''resource_0'')'

EXEC (@sql)
GO
ALTER EVENT session LockCounts ON SERVER 
STATE=start
GO


----------------------------- Create a simple workload that takes locks.------------------------------------------------
-- ---------------------------------------------------------------------------------------------------------------------
USE My_DW
GO

SELECT TOP 1001000 *
FROM Fact_Appointment F
INNER JOIN Dim_AppointmentType D
    ON  F.AppointmentType_Key = D.AppointmentType_Key

GO

-------------------------------------------------------------------------------------------------------------------------
SELECT name, object_id, lock_count FROM 
(SELECT objstats.value('.','bigint') AS lobject_id, 
objstats.value('@count', 'bigint') AS lock_count
FROM (
SELECT CAST(xest.target_data AS XML)
LockData
FROM sys.dm_xe_session_targets xest
JOIN sys.dm_xe_sessions xes ON xes.address = xest.event_session_address
JOIN sys.server_event_sessions ses ON xes.name = ses.name
WHERE xest.target_name = 'synchronous_bucketizer' AND xes.name = 'LockCounts'
) Locks
CROSS APPLY LockData.nodes('//BucketizerTarget/Slot') AS T(objstats)
 ) LockedObjects 
INNER JOIN sys.objects o
ON LockedObjects.lobject_id = o.object_id
--WHERE o.type != 'S' AND o.type = 'U'
ORDER BY lock_count desc
GO
-- 
-- Stop the event session.
-- 
ALTER EVENT SESSION LockCounts ON SERVER
state=stop
GO

I'm getting started with databases in Access for a work project (technically a work-learn for my non-CompSci degree). I have experience using multiple filters on multiple columns in Access/Excel, but for my work-learn I'm trying to design an interface with some "quick filter" functionality built into the UI. Since I'm pretty green, I'm using macros instead of VBA, which seems to be doing the job fine... but I've hit a snag, and was hoping y'all might be able to help.

I have a split-view form, with the top containing record details and quick search options, and the bottom containing the table itself as filtered (which serves as a list of results). What I'm trying to do is filter a column by the selections from two dropdowns on a form.

I was able to do this by doing:

ApplyFilter WHERE Table!Articles!IntCodes LIKE "*"IntCombo1 & "*" & IntCombo2 "*"

But then I realized that while that will find entries where IntCombo1 and IntCombo2 appear in order (e.g. Dog, Cat), it won't show ones where they appear reversed (Cat, Dog).

I would think that the way to fix that would be one of:

ApplyFilter WHERE Table!Articles!IntCodes LIKE "*"IntCombo1 & "*" & IntCombo2 "*" OR "*"IntCombo2 & "*" & IntCombo1 "*"

or

ApplyFilter WHERE Table!Articles!IntCodes LIKE ("*"IntCombo1 & "*" & IntCombo2 "*") OR ("*"IntCombo2 & "*" & IntCombo1 "*")

or

 ApplyFilter WHERE Table!Articles!IntCodes LIKE ("*"IntCombo1 & "*" & IntCombo2 "*" OR "*"IntCombo2 & "*" & IntCombo1 "*")

But none of these work, and I'm not sure why because they all seem pretty boolean-ly sound. Can someone tell me what I'm doing wrong?

(Other perhaps relevant info: the entries in the table I'm filtering are short text words/phrases separated by semicolons ("Dog; Cat; Mouse; Horse", "Mouse", "Cat; Dog", etc.). I totally own that this should probably be converted into a separate table (long-term goal), but the dataset was created over 5 years in an Excel spreadsheet and is therefore a beast, so I haven't gotten around to it yet.)

If I have an AG with 4 different nodes as below:

  1. Primary Node
  2. Secondary Node 1 - Asynchronous Commit - Read Only Yes
  3. Secondary Node 2 - Synchronous Commit - Read Only No
  4. Secondary Node 3 - Synchronous Commit - Read Only Yes

In this case, if a connection to a secondary node is required from an application, will SQL Server route it to Node 1 or Node 3? Node 2 is out of the question because its Read Only setting is No, isn't it?

Ideally it should be routed to Node 3, because it has synchronous commit and its data will be more consistent with the primary. Is that what will happen here?
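For what it's worth, my understanding (which I'd like confirmed) is that read-intent connections follow whatever READ_ONLY_ROUTING_LIST is configured for the primary replica, not the commit mode. A sketch with hypothetical AG/replica names:

```sql
-- Hypothetical names: [MyAG] is the AG, 'PrimaryNode'/'Node3'/'Node1' are replicas.
-- Read-intent connections (ApplicationIntent=ReadOnly) would be routed to the
-- first available replica in this list, in order.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'PrimaryNode' WITH
(
    PRIMARY_ROLE
    (
        READ_ONLY_ROUTING_LIST = (N'Node3', N'Node1')
    )
);
```

So is the Node 1 vs. Node 3 choice really decided by this list rather than by synchronous vs. asynchronous commit?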

  1. Source is one service pack behind destination (2012). I know service packs can be an issue here, but it looks like this only applies to versions 2000 and 2005.
  2. When using sp_helptext sp_MSupd_PROC, the object uses a CASE WHEN on the binary input (the last parameter) and updates or adds columns with the other parameters where the primary key column equals the primary key passed in. For the non-SQL types reading this: is there an Entity Framework approach to this? This entire replication process looks slow and completely outdated, and I highly doubt it uses efficient .NET code.
  3. Using sp_help on the objects on both the source and destination shows that they're both the same. Also, I did look at this briefly, but no specific transaction number comes with the error, so I can't do his second step.
  4. I did a quick verification of the data in the table, confirming that it matches the parameters shown by sp_helptext.

I haven't found anything else I can do to troubleshoot here, and updating service packs right now isn't an option. Note that this replication was fine yesterday and the day before, so what would cause it to be fine for multiple days but not now? I realize that if users change or update the source or destination object that would cause problems, but I don't see that being the case.

Thanks.

I got a question from our product team to build a "table of availability of customer service representatives (users)".

For example: user_A was available to accept a call from 12:00 to 12:03, so I have in the database a table with UserName = user_A, StartDatetime = 20151110 12:00, EndDateTime = 20151110 12:03.

From this data I need to create an availability report that shows which users were available in one-minute intervals (every minute) throughout the day.
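The shape of report I'm after is something like this (a sketch only; `Availability` stands in for my real table name, and the minute calendar is generated on the fly):

```sql
-- Which users were available at each minute of 2015-11-10 (sketch).
WITH minutes AS (
    SELECT DATEADD(minute, n, '20151110') AS minute_start
    FROM (SELECT TOP (1440) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
          FROM sys.all_objects) AS nums
)
SELECT m.minute_start, a.UserName
FROM minutes m
JOIN Availability a
  ON a.StartDatetime <= m.minute_start
 AND a.EndDateTime   >  m.minute_start
ORDER BY m.minute_start, a.UserName;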