There is a table in my database that stores paths for the video files used by my website. The files are stored on an SSD.

path character varying(255)
id serial

The path will always be like this:

/files/client/\d+/(attachment|user)/\d+/(main|thumbnail)

Here are some examples of valid paths:

/files/client/743052/attachment/2123598/main
/files/client/743052/attachment/2123598/thumbnail
/files/client/1475296/user/3541234/main
/files/client/1475296/user/3541234/thumbnail

Question: How can I create a trigger that checks the value on INSERT and raises an exception if the path does not match the pattern (/files/client/\d+/(attachment|user)/\d+/(main|thumbnail))?

I'm using PostgreSQL 9.1.

UPDATE:

Some more examples here:

/files/client/45345/user/3542341234/main -- PASS
/files/client/45345/user/3542341234/thumbnail -- PASS
/files/client/253623/attachment/35334/main -- PASS
/files/client/253623/attachment/35334/thumbnail -- PASS
/files/client/45312341245/users/12545/main -- WRONG!
/files/client/45312341245/users/12545/thumbnail -- WRONG!
/files/client/45345/attachment/1223545/mains -- WRONG!
/files/client/45345/attachment/1223545/thumbnails -- WRONG!
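The check reduces to matching the whole value against an anchored version of that pattern. As a quick sanity check of the regex against the examples above, written in Python purely for illustration (the trigger itself would use PostgreSQL's regex matching):

```python
import re

# The path pattern from the question; fullmatch() anchors it at both ends,
# so "users", "mains" and "thumbnails" are rejected.
PATH_RE = re.compile(r'/files/client/\d+/(attachment|user)/\d+/(main|thumbnail)')

def is_valid_path(path):
    """Return True only if the whole path matches the required layout."""
    return PATH_RE.fullmatch(path) is not None

print(is_valid_path('/files/client/45345/user/3542341234/main'))    # True
print(is_valid_path('/files/client/45312341245/users/12545/main'))  # False
```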

I am trying to optimize a procedure. There are three different update queries in the procedure.

update #ResultSet
set MajorSector = case 
        when charindex('  ', Sector) > 2 then rtrim(ltrim(substring(Sector, 0, charindex('  ', Sector)))) 
            else ltrim(rtrim(sector)) 
        end

update #ResultSet
set MajorSector = substring(MajorSector, 5, len(MajorSector)-4)
where left(MajorSector,4) in ('(00)','(01)','(02)','(03)','(04)','(05)','(06)','(07)','(08)','(09)')

update #ResultSet
set MajorSector = substring(MajorSector, 4, len(MajorSector)-3)
where left(MajorSector,3) in ('(A)','(B)','(C)','(D)','(E)','(F)','(G)','(H)','(I)','(J)','(K)','(L)','(M)','(N)','(O)','(P)','(Q)','(R)','(S)','(T)','(U)','(V)','(W)','(X)','(Y)','(Z)')
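For reference, the net effect of the three updates on a single value can be sketched as plain string logic (Python here, only to make the transformation explicit; the 1-based T-SQL positions are converted to 0-based indexing):

```python
def major_sector(sector):
    """Mimic the three T-SQL updates: cut at the first double space,
    trim, then strip a leading '(NN)' or '(X)' prefix."""
    cut = sector.find('  ')          # charindex('  ', Sector) > 2 becomes cut > 1
    s = (sector[:cut] if cut > 1 else sector).strip()
    if len(s) >= 4 and s[0] == '(' and s[3] == ')' and s[1:3].isdigit():
        return s[4:]                 # strip '(00)'..'(09)' style prefixes
    if len(s) >= 3 and s[0] == '(' and s[2] == ')' and s[1].isalpha():
        return s[3:]                 # strip '(A)'..'(Z)' style prefixes
    return s

print(major_sector('(01)Energy  extra'))  # Energy
print(major_sector('(A)Mining'))          # Mining
```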

All three update queries complete in less than 10 seconds.

Execution plan for all three update queries.

https://www.brentozar.com/pastetheplan/?id=r11BLfq7b

My plan was to combine the three update queries into a single update so that the IO is reduced.

;WITH ResultSet
     AS (SELECT CASE
                  WHEN LEFT(temp_MajorSector, 4) IN ( '(00)', '(01)', '(02)', '(03)', '(04)', '(05)', '(06)', '(07)', '(08)', '(09)' ) 
                      THEN Substring(temp_MajorSector, 5, Len(temp_MajorSector) - 4)
                  WHEN LEFT(temp_MajorSector, 3) IN ( '(A)', '(B)', '(C)', '(D)','(E)', '(F)', '(G)', '(H)','(I)', '(J)', '(K)', '(L)','(M)', '(N)', '(O)', '(P)','(Q)', '(R)', '(S)', '(T)','(U)', '(V)', '(W)', '(X)','(Y)', '(Z)' ) 
                      THEN Substring(temp_MajorSector, 4, Len(temp_MajorSector) - 3)
                  ELSE temp_MajorSector
                END AS temp_MajorSector,
                MajorSector
         FROM   (SELECT temp_MajorSector = CASE
                                             WHEN Charindex('  ', Sector) > 2 THEN Rtrim(Ltrim(Substring(Sector, 0, Charindex('  ', Sector))))
                                             ELSE Ltrim(Rtrim(sector))
                                           END,
                        MajorSector
                 FROM   #ResultSet)a)
UPDATE ResultSet
SET    MajorSector = temp_MajorSector  

But this takes around 1 minute to complete. I checked the execution plan; it is identical to that of the first update query.

Execution plan for the above query:

https://www.brentozar.com/pastetheplan/?id=SJvttz9QW

Can somebody explain why it is slow?

Login failed. The login is from an untrusted domain and cannot be used with Windows authentication.

Error: 18452, Severity: 14, State: 1.

SSPI handshake failed with error code 0x8009030c, state 14 while establishing a connection with integrated security; the connection has been closed. Reason: AcceptSecurityContext failed. The Windows error code indicates the cause of failure.

Error: 17806, Severity: 20, State: 1

How do I fix this login failure?

I have two production instances of MySQL on AWS RDS that failed with:

2017-06-20T15:48:35.226833Z 0 [ERROR] Error log throttle: 1 'Can't create thread to handle new connection' error(s) suppressed
2017-06-20T15:48:35.226873Z 0 [ERROR] Can't create thread to handle new connection(errno= 11)

These are two db.m4.2xlarge instances; one capped at 102 connections while the other capped at 38, and no more connections could be made past that. This type of instance should be able to accept more than 2K connections with the default configuration (which I have).

I opened a support ticket and they manually changed some settings on the instances. After that and a reboot, I could connect with 2500 connections on each server to validate resolution.

Any idea why they had to manually change some settings on the instance (underlying OS I guess)? Is this something to be expected from RDS? What could we be doing that triggered this condition?

Thanks!

I urgently need to restore a backup dump taken three weeks ago that did not complete successfully. I'm using an Oracle 11g database in NOARCHIVELOG mode. The export command used was as follows:

expdp "[email protected] AS SYSDBA" DUMPFILE=Backup_01022017.dmp LOGFILE=BakupLog_01022017.log FULL=y

The backup log shows that the process terminated with the following error.

ORA-39097: Data Pump job encountered unexpected error -1089
ORA-39065: unexpected master process exception in MAIN
ORA-01089: immediate shutdown in progress - no operations are permitted

Fortunately, the necessary tables are shown in the log as exported. Is there any way I can restore or import these tables from this dump?

I am working on projects that require a lot of categories and subcategories, DB trees, and different data structures and sets. Some of them require really heavy queries. Primarily, I'm a web programmer; I do front end, backend, and database work. I'm a one man army!

Previously, I did smaller web projects that just required basic database work. Now I'm moving to mid-size and larger web projects, and I also want to expand my professional skills. However, I find it really hard to properly understand database trees, self-referencing tables, and 2-3 levels of subcategories, with their queries and subqueries.

What can be considered advanced SQL programming? What do I need to know to become good with databases? Thanks.

My understanding of SQL Server Developer Edition is that it's free to use for development, along with all its cool tooling (SSIS, SSAS, etc.). What are the limits of this?

Actually, all I really want to know is whether I'm working within the legal boundaries. My setup is:

  • There are two Windows Server machines, both paid and licensed
  • One server (production) has a paid subscription to SQL Server
  • One server (development) has SQL Server Developer Edition (unpaid)

I use VS Community Edition, SSMS, SSDT, etc. on my personal computer, and log into either the production server or the development server. Everything is developed on the development server and then pushed to production.

Am I using SQL Server correctly within its licensing terms?

I have a Table like this:

|   ID   |  OtherID  | Data
+--------+-----------+---------------------------
|  5059  |   73831   | 5103,5107
|  5059  |   73941   | 5103,5104,5107
|  5059  |   73974   | 5103,5106,5107,5108

And the result should return individual rows, like this:

|   ID   |  OtherID  | Data
+--------+-----------+--------------------------
|  5059  |   73831   | 5103
|  5059  |   73831   | 5107
|  5059  |   73941   | 5103
|  5059  |   73941   | 5104
|  5059  |   73941   | 5107
|  5059  |   73974   | 5103
|  5059  |   73974   | 5106
|  5059  |   73974   | 5107
|  5059  |   73974   | 5108

Basically, how can I split my data at the commas into individual rows?

The result will be stored in a temporary table (Like: ID, OtherID, NewID).

My DB2 version is 9.7.
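In DB2 9.7 this kind of split is usually done with a recursive common table expression over the comma positions; the row-multiplying logic itself, sketched in Python for clarity using the sample data above:

```python
# Sample rows from the question: (ID, OtherID, Data).
rows = [
    (5059, 73831, '5103,5107'),
    (5059, 73941, '5103,5104,5107'),
    (5059, 73974, '5103,5106,5107,5108'),
]

# One output row per comma-separated value, keeping ID and OtherID.
split_rows = [(id_, other, value)
              for id_, other, data in rows
              for value in data.split(',')]

for r in split_rows:
    print(r)
```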

With the following table in MySQL:

CREATE TABLE bob(foo ENUM('a','b','c'));

INSERT INTO bob (foo) VALUES ('a'),('b'),('c'),('a'),('a');

SELECT * FROM bob WHERE foo >= 2;
+------+
| foo  |
+------+
| b    |
| c    |
+------+

With the following table in Postgres:

CREATE TYPE stuff AS ENUM ('a', 'b', 'c');
INSERT INTO bob (foo) VALUES ('a'), ('b'), ('b'), ('c'), ('c');
SELECT * FROM bob WHERE foo > 2;
(HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.)

I've read that there are some fancy ways of getting around this problem, but is there a way to create a type that mimics the MySQL capabilities? Or is there a data type with the capabilities that I want? (The ability to validate the string on insert, but also to compare it as an ordinal in WHERE clauses.)
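For context, MySQL stores an ENUM value as its 1-based position in the label list, so foo >= 2 compares that ordinal rather than the text. The behaviour being relied on can be sketched like this (Python, illustrative only):

```python
# MySQL stores an ENUM('a','b','c') value as its 1-based position,
# so `foo >= 2` compares that ordinal, not the string.
LABELS = ('a', 'b', 'c')
ORDINAL = {label: i + 1 for i, label in enumerate(LABELS)}

values = ['a', 'b', 'c', 'a', 'a']          # the rows inserted in the question
matches = [v for v in values if ORDINAL[v] >= 2]
print(matches)  # ['b', 'c']
```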

I have a SSIS Data Flow Task that takes CSV output (from a flat file connection) and passes that to a Script Component. When I run the task I get the following error:

Script Component has encountered an exception in user code

Project name: blahblahbla

String was not recognized as valid DateTime

at System.DateTimeParse.Parse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles)
at ScriptMain.Input0_ProcessInputRow(Input0Buffer Row)
at UserComponent.Input0_ProcessInput(Input0Buffer Buffer)
at UserComponent.ProcessInput(Int32 InputID, String InputName, PipelineBuffer Buffer, OutputNameMap OutputMap)
at Microsoft.SqlServer.Dts.Pipeline.ScriptComponent.ProcessInput(Int32 InputID, PipelineBuffer buffer)
at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.ProcessInput(Int32 inputID, PipelineBuffer buffer)

Which is interesting because I don't think there is a date column in the CSV, and I'm not using DateTime in this code at all.

When I go into the script and debug it, it runs smoothly. Then I take away the breakpoint and it errors again. This seems to be some kind of race condition, which makes me worried about my other script components. This component was created via copy-paste - could that affect anything?

My user code for that function is here:

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Create a new row
    DataRow glRow = _data.NewRow();

    glRow["xx"] = Row.xx;
    glRow["xx"] = Row.xx;
    glRow["xx"] = Row.xx;
    glRow["xx"] = Row.xx;
    glRow["xx"] = ConvertToBool(Row.xx);
    glRow["xx"] = ConvertToBool(Row.xx);

    // Add row to _data
    _data.Rows.Add(glRow);

}

And the function is:

private object ConvertToBool(string obj)
{
    // Empty (or all-space) values become NULL in the DataRow.
    if (obj.Replace(" ", "") == "") return DBNull.Value;
    // "1" maps to true; anything else to false.
    return obj == "1";
}

I'm trying to change a stored procedure that updates data on both a local and a linked server: "UPDATE [PcName].[DatabaseName].[dbo].[TableName]"

When both servers are available, the stored procedure can be altered without a problem. If the linked server is down, I get this error message:


OLE DB provider "SQLNCLI11" for linked server " linkedserver " returned message "Login timeout expired". OLE DB provider "SQLNCLI11" for linked server "linkedserver " returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.". Msg 53, Level 16, State 1, Line 0

Named Pipes Provider: Could not open a connection to SQL Server [53].

How can I still change a stored procedure while the linked server is down?

Scenario:

VendorID | OPTFIELD   |  VALUE  |   VDESC
---------+------------+---------+---------------------
7ELEVEN  | CHANNELAP  |   03    | Independents
7ELEVEN  | CUSTGROUP  | Metcash | Metcash
7ELEVEN  | SUBCHANNEL |   ING   | Independent Grocery

Solution Required:

Vendor_ID | level_2 | level_2_desc | level_3 |     level_3_dec     | level_4 | level_4_desc
----------+---------+--------------+---------+---------------------|---------|-------------------
7ELEVEN   |    03   | Independents |   ING   | Independent Grocery | Metcash | Metcash

So I just want to convert the rows into columns depending on a particular scenario, based on the value of one of the columns (OPTFIELD).

Example: when OPTFIELD = 'CHANNELAP', the 'VALUE' and 'VDESC' corresponding to 'CHANNELAP' should appear in separate columns: 'VALUE' in level_2 and 'VDESC' in level_2_desc.

So, the value in the 'OPTFIELD' column is the condition on which 'VALUE' and 'VDESC' should be moved to different columns:

IF 'OPTFIELD' = 'CHANNELAP' then level_2 (VALUE) and level_2_desc (VDESC)
IF 'OPTFIELD' = 'CUSTGROUP' then level_3 (VALUE) and level_3_desc (VDESC)
IF 'OPTFIELD' = 'SUBCHANNEL' then level_4 (VALUE) and level_4_desc (VDESC)

So, the final result will be 1 row for VendorID = '7ELEVEN'

VendorID | level_2 | level_2_desc | level_3 | level_3_desc | level_4 | level_4_desc
---------+---------+--------------+---------+--------------+---------+--------------------
7ELEVEN  |   03    | Independents | Metcash | Metcash      |   ING   | Independent Grocery
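The OPTFIELD-to-column mapping is a fixed lookup, so the pivot logic can be sketched as follows (Python, illustrative only; in SQL Server this would typically be conditional aggregation, e.g. MAX(CASE ...) grouped by VendorID). This follows the IF mapping above, where CUSTGROUP feeds level_3 and SUBCHANNEL feeds level_4:

```python
# Sample rows: (VendorID, OPTFIELD, VALUE, VDESC).
rows = [
    ('7ELEVEN', 'CHANNELAP',  '03',      'Independents'),
    ('7ELEVEN', 'CUSTGROUP',  'Metcash', 'Metcash'),
    ('7ELEVEN', 'SUBCHANNEL', 'ING',     'Independent Grocery'),
]

# OPTFIELD decides which pair of output columns VALUE/VDESC land in.
LEVEL = {'CHANNELAP': 2, 'CUSTGROUP': 3, 'SUBCHANNEL': 4}

result = {}
for vendor, optfield, value, vdesc in rows:
    out = result.setdefault(vendor, {})   # one output row per VendorID
    lvl = LEVEL[optfield]
    out['level_%d' % lvl] = value
    out['level_%d_desc' % lvl] = vdesc

print(result['7ELEVEN'])
```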

Thanks

I have created a C# application that uses a SQL Server database. When I run the application, it communicates with SQL Server and retrieves data.

My problem is that when the database is very large and I request a large amount of data (35-40 MB), it takes a long time (10-20 minutes) to get the data.

Is there any way to get compressed data from the server and reduce the network lag?

Unable to find the cause of the SQL Server service being terminated unexpectedly (Event ID: 7034).

I looked into the SQL Server error logs but could not find the reason why the SQL Server service was terminated.

So I checked the Windows Event Viewer and found the following error message:

Event Message : The SQL Server (MSSQLSERVER) service terminated unexpectedly. It has done this 1 time(s).

Error Event Id : 7034

Since last night, my SQL Server service has been terminating with this error.

The service terminates and usually restarts automatically, but sometimes it doesn't restart and we have to start it manually.

Version Details: Microsoft SQL Server 2016 (RTM) - 13.0.1601.5 (X64) Standard Edition (64-bit)

OS: Windows Server 2012 R2 Standard 6.3 (Build 9600: ) (Hyper-visor)

I am trying to create a main form with a subform as a datasheet, to be used as an asset viewer. I have three buttons that I want to implement in this main form: add, edit, and delete entry. The add and edit buttons will open a new form to add or edit entries in the datasheet. I have the add button working well; however, how can I make the edit button edit the row that is selected in the subform (data table)? As it currently stands, the edit button only opens the first entry of the data table. Thanks, and hopefully someone out there can help me out.

I am an amateur DB developer, working on an Access database for data entry using a split form. I am entering data into 3 tabbed forms, and the main form is a split form. The problem I am facing is that the data I previously entered directly into the tables is shown below the form, but the rows I enter through the form are not shown until I refresh the page. Can anyone help me with this problem? Thanks in advance!

According to the documentation, count and distinct queries should be recorded with op = count and op = distinct in system.profile. However, when I ran such queries in the shell, both were recorded with op = command.

> db.p.count()
23
> db.p.distinct("item")
[ "pencil", "eraser" ]
> db.system.profile.find().sort({$natural: -1}).limit(2).pretty()
{
    "op" : "command",
    "ns" : "test.p",
    "command" : {
        "distinct" : "p",
        "key" : "item",
        "query" : {

        }
    },
    (...)
}
{
    "op" : "command",
    "ns" : "test.p",
    "command" : {
        "count" : "p",
        "query" : {

        },
        "fields" : {

        }
    },
    (...)
}

For remove this works perfectly fine: I execute a remove and get op = remove in system.profile.

> db.p.remove({item: "pen"})
WriteResult({ "nRemoved" : 0 })
> db.system.profile.find().sort({$natural: -1}).limit(1).pretty()
{
    "op" : "remove",
    "ns" : "test.p",
    "query" : {
        "item" : "pen"
    },
    (...)
}

I am trying to migrate my database (11.2.0.4) A to another machine by using an RMAN backup. At present, on the source machine, the redo log files and temp files are stored under the directory foo (/foo). On the target machine, I would like to store these files under the directory bar (/bar). In past practice, what I did was create foo for these files and then move them to bar manually after the recovery was complete. RMAN is such a powerful tool that I think there must be an easier way to satisfy my requirement. So far, by searching Google, I have found a useful RMAN command for temp files:

set newname for tempfile <fileno> to '<path>'

It seems that there is no similar command for redo log files. Before using this command to perform a recovery, I have several questions:

  1. When we take a full backup, the backup does not include the temp tablespace or the redo log files. However, in past migrations I found that when I open the database (alter database open resetlogs) after recovery is complete, the redo log files and temp files exist. Will RMAN create these automatically for the database? I can't find reliable Oracle documentation on this topic.
  2. Changing the path for the redo log files by hand is tedious, so I would like to finish this task in the RMAN scripts. Is there any command available to rename the redo log files in the RMAN scripts?

Background

We're moving a legacy application from a non-relational database to PostgreSQL.

Problem

We'd like to enable foreign keys on a lot of fields for future data integrity and so that reporting tools can easily determine which tables are linked. Some of the current data won't pass the FK check and because of the legacy app connecting to the database we can't turn on the FKs yet.

Temporary Solution

Some of the programmers are suggesting that we disable triggers on the database as shown here so that the FKs aren't enforced but will still be there for reporting tools.

Question

I'm afraid that this will cause a bunch of other unintended consequences, but I'm not sure what they are. Can anyone put my fears to rest or explain why this is a really bad idea? I'm assuming that one reason it is bad is that it would disable all triggers, not just the triggers that enforce the FK constraints.

I inherited a third-party SQL Server 2005 server and database, and I need to look into it to be able to generate reports and dashboards.

On one table, I found a new icon: a blue-silver upside-down key, alongside my other primary and foreign keys.

When I right-click it, the context menu gives me the option to "Script Constraint as", but when I try to "Modify" it, it opens the column design form (like primary keys), and yet, it appears in the "Indexes" section, not constraints.

Does anyone know what it is called and what it is used for?

Upside-down key

Once I select from my product table with an additional ROW_NUMBER() column (Num), I get the result below (I limited the retrieval to 20 rows).

Query

SELECT * 
FROM (SELECT ROW_NUMBER() OVER(ORDER BY ProductCode) AS Num, 
             ProductName
      FROM   BASE_Product 
      WHERE IsActive = 1) BASE_Product 
WHERE Num > 0 
AND   Num < (21)

Result

[screenshot of the result]

My requirement is: how can I skip every 3 rows and retrieve only 7 rows?

Expected result

I need the result such as below:

[screenshot of the expected result]

I have two Oracle database instances, ORCL11 and ORCL10. I've configured them, and it looks correct. Then I connect to the replication admin user on ORCL10:

conn [email protected]

And run this SQL command as the replication admin user on ORCL10:

BEGIN
  DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'scott_repg');
END;
/

Then I connect to the replication admin user on ORCL11 to see the gname status:

conn [email protected];
select gname,status from dba_repgroup;

It displays gname = SCOTT_REPG and status = NORMAL, but if I connect to the replication admin user on ORCL10:

conn [email protected];
select gname,status from dba_repgroup;

It displays gname = SCOTT_REPG and STATUS = QUIESCING. I thought this would not have any impact, so I tested my replication on ORCL10, for example:

conn [email protected];
insert into DEPT values (101,'Financial');
select * from DEPT;

It works on the DEPT table on ORCL10 (1 row created), but when I switch to ORCL11 and run a simple SQL query:

conn [email protected];
select * from DEPT;

it doesn't show anything. I've searched on Google but there is no tutorial on how to fix this. I think the problem is how to switch from QUIESCING to NORMAL mode on ORCL10. I've tried this to fix it:

ALTER SYSTEM QUIESCE RESTRICTED;
ALTER SYSTEM UNQUIESCE;

But it still doesn't solve the problem. Would you help me fix it? Thanks!

I have a roads table:

+---------+---------+
| ROAD_ID | MEASURE |
+---------+---------+
|       1 |    1000 |
|       2 |     100 |
|       3 |     500 |
+---------+---------+

And I have a related linear referencing table called projects:

+------------+---------+--------------+------------+
| PROJECT_ID | ROAD_ID | FROM_MEASURE | TO_MEASURE |
+------------+---------+--------------+------------+
|        101 |       1 |            0 |         50 |
|        102 |       1 |          100 |        500 |
+------------+---------+--------------+------------+
|        103 |       2 |            0 |        100 |
+------------+---------+--------------+------------+
|        104 |       3 |          100 |        200 |
|        105 |       3 |          400 |        500 |
+------------+---------+--------------+------------+

Users manually enter projects into the attribute table in the G.I.S. application.

The workflow for entering new projects is:

  1. Create a blank new record.
  2. The user enters the ROAD_ID manually.
  3. The FROM_MEASURE has a default value of 0.
  4. The user enters the TO_MEASURE manually.

You can't tell from the sample table, but 9 times out of 10, projects pertain to an entire road (not just a portion of the road). An example is PROJECT #103.

Rather than make the user manually enter the TO_MEASURE, would it be possible to populate it automatically (as a default) once the ROAD_ID is entered?

After ROAD_ID is updated, automatically get the related road's MEASURE value as the TO_MEASURE.
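The desired default amounts to looking up the road's full MEASURE when the project row is entered. As a sketch of that lookup (Python, illustrative only; the real implementation would depend on the GIS/database product, e.g. a BEFORE INSERT trigger selecting MEASURE from the roads table):

```python
# MEASURE per road, keyed by ROAD_ID (sample data from the question).
roads = {1: 1000, 2: 100, 3: 500}

def default_measures(road_id, to_measure=None):
    """FROM_MEASURE defaults to 0; TO_MEASURE defaults to the road's full MEASURE."""
    if to_measure is None:
        to_measure = roads[road_id]  # whole-road project, e.g. PROJECT #103
    return (0, to_measure)

print(default_measures(2))       # (0, 100) -- the whole of ROAD_ID 2
print(default_measures(3, 200))  # (0, 200) -- user supplied TO_MEASURE
```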

I'm designing a web app for managing mentor-mentee sessions. Mentees may be presented with different surveys with different types of questions. I would appreciate feedback/criticism on my design so far.

survey model

Surveys may come in different forms, like a normal HTML form, a card sorting activity, or a table that the mentees have to fill in.

Answers to questions may be in a yes/no, list form, a drawing, or plain text. Similarly, questions may be plain text, tables, fill in the blank etc. I created a content field for both question and answer in which I plan to store question/answer data in xml format, so if it's a table I can retrieve individual columns easily in the app logic. Is this a good idea?

I also created a separate logTimes table to track user logins and whether each was successful. A login may be from either a volunteer (mentor or supervisor) or a mentee, so one of idVolunteer or idMentee in this table will always be null. What do you think?