Channel: SCN : All Content - SAP Applications on SAP Adaptive Server Enterprise (SAP ASE)

SAP BO 4.1 on SAP ASE - ct_connect(): directory service layer error (FWB00090)


Hello Team,

 

We have installed Sybase Adaptive Server Enterprise on HP-UX and we are trying to install BO 4.1 on Windows 2012.

 

While installing SAP BO 4.1 we tried to use this Sybase ASE for the CMS (repository), but it throws the following error and says it couldn't verify the logins.

 

Does anyone know what configuration settings need to be made to resolve the error below?

 

Error:

 

ct_connect():directory service layer: internal directory control layer error: Requested server name not found. (FWB00090)
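For reference, an assumption on my part rather than information from the post: FWB00090 with "Requested server name not found" usually means the client cannot resolve the server name through its directory source, i.e. the interfaces file (sql.ini on Windows). A minimal sketch of an sql.ini entry on the BO host, with the server name, hostname, and port as placeholders:

[ASESID]
master=TCP,hpux-host.example.com,4901
query=TCP,hpux-host.example.com,4901

The name in brackets must match the server name entered during the BO/CMS setup exactly.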

 



SYBASE DBA COCKPIT ERROR


Hi Experts,

 

After a restore I get an error in transaction DBACOCKPIT:

 

 

SQL Message: [ASE Error SQL4939]ALTER TABLE 'saptools..DBH_SNAP_ASEINSTANCE' failed. You cannot drop column 'SOURCE_HOSTNAME' because it is being used by an index. Drop the index 'DBH_SNAP_ASEINSTANCE~1' before dropping this column

 

Any clue?
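For what it's worth, the message itself names the usual workaround: drop the listed index, then retry the operation that issued the ALTER TABLE. A hedged sketch (names are taken from the message; set quoted_identifier on is needed because of the ~ in the index name):

use saptools
go
set quoted_identifier on
go
drop index DBH_SNAP_ASEINSTANCE."DBH_SNAP_ASEINSTANCE~1"
go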

Effective usage of sampling for update statistics in ASE


Sampling for update statistics in ASE was introduced a long time ago, in ASE 12.5.0.3, released in 2003. For some reason, this very useful feature has never been well advertised or documented by Sybase. The whitepaper by Eric Miner still remains the most valuable and informative source on sampling for update statistics, even though it was written more than 10 years ago. In my experience, many customers either don't use sampling for update statistics at all or use it in a less than optimal way. I recommend reading the whitepaper to better understand how sampling for update statistics works in general.

 

Recently, I implemented sampling for two different OEM customers that had just upgraded to ASE 15.7. The goal was to reduce the time for update statistics, which used to take hours for big history tables and consume a lot of CPU and I/O. During my tests I learned some new facts about sampling for update statistics, and I'd like to share my findings here; I hope they will be useful.

 

There are a number of variations of the "update statistics" command. Let's see which options are available to us in ASE 15.7:

 

update statistics table_name [index_name] - updates statistics on the leading column only for the specified index, or on the leading columns of all indexes on the table if no index name is given. Histograms on non-leading columns do not get updated, which is the main limitation of this form of the command. In my experience, absent or stale statistics on non-leading index columns can cause serious performance problems for queries, which is why I tend not to use this form of update statistics. Sampling cannot be used in this case, not even for the leading index columns.

 

update index statistics table_name [index_name] - updates statistics on all columns of the index, or on all indexed columns of the table if no index name is given. In the latter case, if a column appears in more than one index, the statistics on that column may be updated several times. Sampling can be used on non-leading columns only. Usually I recommend this form as the default option because it is safe, simple, and updates all the required statistics. However, in some cases it may just take too much time and resources; for those cases, the approach I describe later is useful.

 

It is interesting that in some ASE releases optdiag may misleadingly report the use of sampling on leading index columns; see CR 725185. The CR is still open, and I have seen this problem at different customer sites on ASE versions 15.7 ESD#2 and later. In such cases, optdiag reports the same sampling rate for the leading index column(s) as for the non-leading columns, but in fact no sampling is performed on the leading columns.

 

update statistics table_name (colA, colB, ...) - affects statistics in a way similar to what a "create index" on the same column set would do. The command updates the histogram on the leading column only and gathers multicolumn statistics on column combinations such as (colA, colB), (colA, colB, colC), etc. By multicolumn statistics I mean information about densities and unique values for groups of columns in a composite index; you can find it easily in optdiag output.

 

update statistics table_name (colA),(colB),... - not to be confused with the previous syntax: here each column is enclosed in its own brackets. This updates histograms on all columns in the list and does not gather multicolumn statistics. It is a shortcut for running multiple update statistics commands, each on a single column. Sampling can be used on all columns in the list.
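To make the variants concrete, here is a sketch against a hypothetical table orders with a composite index order_ix on (cust_id, order_date); the names are mine, purely illustrative:

update statistics orders order_ix                -- histogram on cust_id only; sampling not possible
update index statistics orders order_ix          -- histograms on cust_id and order_date; sampling on order_date only
update statistics orders (cust_id, order_date)   -- histogram on cust_id plus multicolumn densities
update statistics orders (cust_id),(order_date)  -- histograms on both columns, no multicolumn statistics; sampling on both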

 

Note that the list above does not include hash-based update statistics, a new feature introduced with ASE 15.7 ESD#2. This feature is still functionally limited, and given the number of CRs about it that I have seen in the cover letters of recent EBFs/SPs, I conclude that it is not very mature yet. If you have positive experience with hash-based update statistics in production systems, please let me know - it would be very interesting.

 

After some tests, I came to the following guidelines:

1. If update index statistics without sampling is fast enough for you, just use it on all your tables.

 

2. You may discover that on bigger tables update index statistics without sampling is just too slow and consumes too many resources. In that case, try update index statistics with sampling. This helps mostly with composite indexes.

 

3. If the previous step is not effective enough, or not relevant, try update statistics on column sets as they appear in the indexes on the table. This lets you use the power of sampling fully. You can do it in two steps:

 

Suppose we have a table with a composite index on (colA, colB, colC) and we have decided that sampling at 3 percent is good enough for our needs. First, we update the histogram on the leading column along with the multicolumn statistics:

 

update statistics table_name (colA, colB, colC) with sampling = 3 percent

 

At this stage, the histograms on colB and colC have not been updated, so we update them as follows:


update statistics table_name (colB),(colC) with sampling = 3 percent

 

That's all: we now have up-to-date statistics on all three columns, including multicolumn statistics, and we have used sampling for every column in the index. You are welcome to try it on your tables and see the difference.

 

As for the sampling rate, I have found that with a relatively big table (at least millions of rows) we can benefit greatly from very low sampling rates, as low as 1 percent, in most cases. I recommend comparing optdiag outputs after applying various sampling rates; the difference in performance may be quite significant. For one of my customers, I found that decreasing the sampling rate from 4 percent to 1 percent cut the time required for update statistics by more than a factor of three, without compromising the accuracy of the column histograms.
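A hedged sketch of such a comparison (database, table, login, server, and file names are placeholders; optdiag is the standard ASE utility for inspecting statistics):

optdiag statistics mydb..big_history -Usapsa -SMYSERVER -o stats_1pct.out

Run it once after each update statistics pass and compare the histogram steps and densities between the output files.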

 

I have prepared a test case based on the dataset I used in my previous blog post about materialized views in ASE, so feel free to ask me for the test case details if you think it may be useful to you. However, in my opinion, it would be much better to test my approach on your own data, because in most cases your data distribution will be quite different from the one in my test case.

 

This post was originally published at Effective usage of sampling for update statistics in ASE - Database Diver's Diary.

SAP loses connection to Sybase ASE database: Could not connect to the server. JZ00M: Login timed out.


Hello all,

 

I'm currently having an issue with an SAP system on Sybase ASE 15.7.0.013, where the SAP work processes cannot reconnect to the database.

 

When this happens, not even isql can connect to the database:

 

JZ00M: Login timed out. Check that your database server is running on the host and port number you specified. Also, check the database server for other conditions (such as a full tempdb) that might be causing it to hang.

JZ00L: Login failed.  Examine the SQLWarnings chained to this exception for the reason(s).

Error code=0

SQL state=JZ00L


There is enough free space in tempdb.


We have checked the maximum number of connections, but it isn't clear that this is the real issue:


SELECT suser_name(suid),count(*) from master..sysprocesses group by suid;


suser_name(suid)  count(*)
----------------  --------
SAPSR3                 104
NULL                    16
jstask                   4
sapsa                    3


sp_configure "number of user connections"

go

 

 

Parameter Name              Default  Memory Used  Config Value  Run Value  Unit    Type
--------------------------  -------  -----------  ------------  ---------  ------  -------
number of user connections       25        57999           200        200  number  dynamic
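As a quick cross-check (my suggestion, not part of the original post), you can compare the effective connection limit with the current usage; @@max_connections reports the maximum number of connections the server can actually support with its current resources:

select @@max_connections
go
select count(*) from master..sysprocesses
go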

 

We traced the user connections, but we didn't get anything useful from it:

 

dbcc traceon (11205)

go

 

We also ran Windows performance counters and saw Sybase kernel values growing steadily until the database issue occurred: the ts_high parameter, and also the Cache Bytes Peak value from the Memory counter.

 

Please advise.

Thanks


How to update Sybase ASE from 15.7.0.021 to 15.7.0.101 on Windows


Dear Gurus,

 

I want to upgrade my Sybase ASE from 15.7.0.021 to 15.7.0.101, under SAP ECC 6.0 EHP6 on Windows,

on a cluster (PRD) and non-cluster (DEV/QAS).

 

Could you please let me know the process, or point me to a helpful guide?

 

I searched the link below but did not find any useful information for my scenario:

SyBooks Online

 

Regards

Balaji

Issue with Sybase ASE max memory


Hi Team,

 

I have a strange issue ,

 

1> sp_configure "max memory" , 0 , "12G"
2> go
Msg 10841, Level 16, State 1:
Server 'PS1', Procedure 'sp_configure', Line 1310:
The value of parameter 'max memory' '6291456' cannot be less than the 'total logical memory' size '7291579'. Please reconfigure 'max memory' to be greater than or equal to the total logical memory required.
(return status = 1)


I want to reduce the value max memory due to high swapping on the server.

1> sp_cacheconfig

2> go

Cache Name          Status  Type      Config Value  Run Value
------------------  ------  --------  ------------  ------------
default data cache  Active  Default   10240.00 Mb   10240.00 Mb
log cache           Active  Log Only    512.00 Mb     512.00 Mb

(1 row affected)
                                      ------------  ------------
                               Total   10752.0 Mb    10752.0 Mb

==========================================================================
Cache: default data cache,   Status: Active,   Type: Default
      Config Size: 10240.00 Mb,   Run Size: 10240.00 Mb
      Config Replacement: strict LRU,   Run Replacement: strict LRU
      Config Partition: 16,   Run Partition: 16

IO Size  Wash Size  Config Size  Run Size    APF Percent
-------  ---------  -----------  ----------  -----------
  16 Kb  512000 Kb   5120.00 Mb  5120.00 Mb           50
 128 Kb  512000 Kb   5120.00 Mb  5120.00 Mb           50

==========================================================================
Cache: log cache,   Status: Active,   Type: Log Only
      Config Size: 512.00 Mb,   Run Size: 512.00 Mb
      Config Replacement: strict LRU,   Run Replacement: strict LRU
      Config Partition: 1,   Run Partition: 1

IO Size  Wash Size  Config Size  Run Size   APF Percent
-------  ---------  -----------  ---------  -----------
  16 Kb    2448 Kb     12.00 Mb   12.00 Mb           50
  32 Kb   61440 Kb    500.00 Mb  500.00 Mb           50

 

 

I am struggling to find out what else to reduce.
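For what it's worth, a sketch under my own assumptions rather than a verified fix: 'max memory' cannot be set below 'total logical memory', and the largest consumer here is the 10 GB default data cache, so the cache has to shrink first. Something along these lines, with the target size as a placeholder; note that decreasing a cache size is a static change in ASE and only takes effect after a restart:

sp_cacheconfig "default data cache", "8192M"
go
-- after the restart, total logical memory should have dropped enough to allow:
sp_configure "max memory", 0, "12G"
go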

Not able to log in with isql


Hi, I installed ECC 6.0 EHP7 on a Sybase database, but I am not able to log in to the database with isql.

I log in as syb<sid> and run the command isql -Usapsso -S<SID>, but it gives the error below:

 

CS-LIBRARY error:

        comn_get_cfg: user api layer: internal common library error: Configuration section isql not found.

CT-LIBRARY error:

        ct_connect(): user api layer: external error: The connection failed because of invalid or missing external configuration.
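A note from my side (an assumption, not information from the original post): CS-LIBRARY reads an external configuration file, ocs.cfg under $SYBASE/$SYBASE_OCS/config, when external configuration is enabled, and the message suggests it looked for an [isql] section there and did not find one. A minimal hedged sketch of such a section to add to ocs.cfg; my understanding is that an application section can be left empty, in which case the settings from [DEFAULT] apply:

[isql]
;   empty section - isql falls back to the [DEFAULT] settings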

 

 

Regards

Adil

Between Clause taking more time


Hello All,

 

I work on the Business Objects reporting tool, which gets its data from Sybase. We got a requirement to make the date prompts dynamic, meaning the user gets an input to choose date ranges like "last year", "last quarter", "current month", "last month", etc. For example, when the user selects "last week", I try to calculate the dates from getdate() and pass them through a BETWEEN clause.

 

If I write the query as below for one day, giving the dates manually, it runs very fast and I get the data in seconds:

 

SELECT sum(a.marks) FROM students a WHERE a.daily_Dte BETWEEN ( '2014-01-27' ) AND ( '2014-01-28' )

 

Now I have changed just the dates in the BETWEEN clause to the dynamic form below, and the query takes a very long time to execute:

 

SELECT sum(a.marks) FROM students a WHERE a.daily_Dte between dateadd(dd,-1,getdate()) and  getdate()

 

My Sybase version is: Adaptive Server Enterprise/12.5.4/EBF 15432 ESD#8/P/Sun_svr4/OS 5.8/ase1254/2105/64-bit/FBO/Sat Mar 22 14:38:37 2008

 

Can anyone help me with the reason the query takes so long to execute? I have just used the simple getdate() function to calculate yesterday's date and pass it dynamically, but the query runs for a very long time. Is there any setting missing that causes this issue, or do I need to specify anything before I run the query? This is not an indexing issue, because the query runs fine when we give the dates manually. Can anyone please help? Thank you.
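A hedged observation from my side (not from the original post): when the dates are literals, the optimizer can use the column's histogram to estimate how many rows the range covers; getdate() cannot be evaluated at optimization time, so the optimizer falls back on generic estimates and may pick a much worse plan. A sketch of one workaround using variables; note that on 12.5 the optimizer also uses generic estimates for variables, so if this does not help, the safer route is to have the reporting layer compute literal dates and substitute them into the SQL:

declare @start datetime, @end datetime
select @start = dateadd(dd, -1, getdate()), @end = getdate()
SELECT sum(a.marks) FROM students a WHERE a.daily_Dte BETWEEN @start AND @end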


Consultant Field Note - Replication re-syncing strategies


I always state that if the SAP SRS (SAP Replication Server) is up and running with no connections down, then I am assured that my data matches on the Primary and Standby sides. However, due to a "resume connection ..... skip transaction" command, or a misdirected process deleting or modifying records on the Standby side, the records on the Standby may not match the records on the Primary side. Once this is discovered, we need a re-syncing strategy that depends on our appetite for outages on both the Primary and Standby sides. For example, we might have a system that is 24x7 on both the Primary and the Standby where we run our reports. Or perhaps the database itself is terabytes in size, making a dump and load scenario somewhat time-consuming. We are even restricted by the underlying replication configuration architecture. It's up to you to judge the complexity versus time trade-off. Let's walk through some scenarios and the ways we can re-sync our replicated data.

 

For some of these options I am introducing the autocorrection concept. Those of you who know the SAP SRS platform historically know that autocorrection was originally designed for table-by-table replication working with the table replication definition declaration. In later versions of the SAP SRS product this autocorrection concept can be applied at the database or connection level; check out the 'alter connection ... set dsi_command_convert ...' syntax. For the sake of argument, we can have any inserts and updates on the Primary side transposed on the Standby side into a delete followed by an insert. (Note: in some cases even this autocorrection has its faults, but let's keep this discussion very simple.) I am presenting a limited number of options to open up a discussion on this topic and to give some of you further design insights on how to approach a replication architecture. There are more options available; check out my previous blog post for the newer option of MSA re-syncing: Consultant Field Note - Re syncing methods for MSA-aware databases.

 

Option 1: Rebuild the MSA/warm-standby database pair connection using a database dump and load. This option will certainly solve any underlying replication issues that led to the databases being out of sync (except for errant Standby data modification connections). Simply put: a drop connection command is issued, followed by a create connection using a dump marker. The primary database is dumped and loaded into the standby database, and lastly the connection from the SRS to the Standby database is resumed. The Primary side does not need to be down during this operation, and you will understand that the Standby is non-operational until we reach a steady state where the load has completed and all replicated records contained in the outbound queue have been applied to the Standby database. This is good for environments where we can afford the Standby side to be out of commission for this time period. It all depends on the size of the database and the volume of transactions being replicated, compared to the time the business can afford until the Standby side is operational.

 

Option 2: Re-sync the MSA/warm-standby database pair connection using rs_subcmp. Every installation of the SAP SRS binaries comes with the rs_subcmp client program. The rs_subcmp program operates on a table basis: it issues select statements on the Primary ASE server, gathers and resolves the out-of-sync records in the Standby tempdb, and issues insert/update/delete statements on the Standby database side for the table in question. There are lots of reverse-engineering scripts that create runnable rs_subcmp configurations by reading the database system tables. Depending upon the failure, we can read a subset of the data (select * from table where field between (range) order by <primary key>) or we can consider all the rows in the table (select * from table order by <primary key>). This program is fast and efficient; personally it's my top go-to strategy to try first. I would suspend the connection to the Standby side when I execute the command. Autocorrection is a must here. This is good for environments where we can isolate the missing records to a table or a small number of tables. It's also good for low-volume replication instances.
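To make option 2 concrete, here is a hedged sketch of an rs_subcmp resource file (server, database, login, table, and key names are placeholders; see the rs_subcmp documentation for the full parameter list), run with rs_subcmp -f resource_file:

PDS = PRIMARY_ASE
RDS = STANDBY_ASE
PDB = proddb
RDB = proddb
PTABLE = orders
RTABLE = orders
PSELECT = select * from orders order by order_id
RSELECT = select * from orders order by order_id
PUSER = sa
RUSER = sa
KEY = order_id
RECONCILE = Y
VISUAL = Y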

 

Option 3: Stealth re-sync the MSA/warm-standby database pair connection using update statements. This option really takes advantage of the autocorrection we have set on the SRS-to-Standby connection. It only works in a particular scenario, where we have a datetime field to work with and we can afford to change the value of that field by 1 second. The reason we choose datetime as a likely candidate is that we can easily change it back in the next transaction, so that the end result is the same record on both the Primary and Standby sides. If one issues an update statement setting the field equal to its original value, the record would not be replicated; we must effect a change, no matter how small the increment. For example:

 

  • Step 1. update primary record set datetime field = datetime + 1 second 
  • Result: update gets replicated and applied as a delete followed by an insert 
  • Step 2. update primary record set datetime field = original datetime 
  • Result: update gets replicated and applied as a delete followed by an insert of the original row.

 

The difficulty in using this option is finding the right environment. It might be used where we have unlimited time for a re-sync and the select statements cannot be run on the Primary side. We would involve cursor functionality that could walk down the Primary table row by row and effect the changes. It's good for low-volume replication instances, and the obvious fact is that we need to do some prep work for this to be a viable strategy. As you can imagine, it is not very common, but it certainly can be used in a controlled scenario.
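A minimal sketch of the two-step touch from option 3 (table, column, and key names are hypothetical; in practice this would sit inside the cursor loop mentioned above):

declare @id int
select @id = 42  -- hypothetical key of one out-of-sync row
-- step 1: nudge the datetime; with autocorrection this replicates as a delete followed by an insert
update big_table set last_mod = dateadd(ss, 1, last_mod) where id = @id
-- step 2: restore the original value; the row replicates again and now matches the Primary
update big_table set last_mod = dateadd(ss, -1, last_mod) where id = @id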

 

Option 4: Stealth re-sync the MSA/warm-standby database pair connection using update statements with bit manipulation. This option is really a variation on option 3: the same concept, but we have added a bit field to the table. This field serves the sole purpose of allowing an update to occur, which in turn forces the record to be replicated from Primary to Standby. This is not so much a re-sync strategy as a business-driven way to choose certain records to be replicated at a certain point in time. Such an option would have to be applied at the modeling stage of the replication environment architecture.

 

Options 1 and 2 will be the most common of your re-syncing strategies. Options 3 and 4 are special-case strategies; some might even say 3 and 4 are a bit of a design stretch.

 

There is one obvious architecture we need to mention that makes options 1 to 4 unnecessary: the SRS Data Assurance option. The SAP Replication Server Data Assurance Option is licensed through the SySAM license manager and is available on multiple platforms.

 

The Replication Server Data Assurance Option compares row data and schema between two or more Adaptive Server® Enterprise databases and reports discrepancies. It is a scalable, high-volume, and configurable data comparison product. It allows you to run comparison jobs even during replication, thereby eliminating any downtime. SAP Replication Server with the Data Assurance option:

 

  • Compares rows and schemas
  • Creates script for reconciliation
  • Creates Data Manipulation Language (DML) commands for automatic reconciliation
  • Checks job status and generates reports

 

I plan to discuss the Data Assurance option in a later blog.

 

This Field Notes Series is dedicated to observations from the field taken from personal consulting experiences.   The examples used have been created for this blog and do not reflect any existing SAP Customer configuration.   Some material featured here was copied from: http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc01636.1571/doc/html/nro1290742490319.html

Sybase Support - What are SAP's plans for its future?


As an official reseller and representative of Sybase products in Israel, I have been using the Sybase technical support facilities available through the old www.sybase.com portal. As this slowly but steadily dies, what is the alternative SAP provides?

 

I need urgent access to the EBF download page in order to support one of my customers - what is the new link? Is there a transition path from old credentials approved by Sybase to SAP, so that I may continue to access the resources that were available to me at Sybase?

 

I also need urgent access to the Sybase Case-Express system in order to handle a critical case with yet another local Sybase customer - what is the link for that?

 

It is very unhelpful that the transition from the Sybase portal to the SAP portal has not been made seamless for all the users who were using the portal for crucial support functions approved by Sybase in the past. It would be a good idea to track the active customers of the site (say, MySybase users with technical support privileges) and provide them with an alternative solution for the functions they were performing for their customers, before those functions are disabled on Sybase.com. Not doing this disrupts normal support operations with customers and creates a bad reputation both for those providing the support and for Sybase/SAP.

 

Is any quick help on this available from SAP headquarters?

 

Thanks.

Sybase database architecture


Hello All,

 

Can anyone provide me with a standard Sybase database architecture diagram document? Also, how many devices does the Sybase architecture use, and what is the purpose of the saptools database devices?

Getting Started with SAP Applications Using SAP Adaptive Server Enterprise


SAP Adaptive Server Enterprise is SAP's high-performance relational database management system for mission-critical, data-intensive environments. This document gives you an overview of the setup for database installation and administration of an SAP ASE database that is run with the SAP system.

View this Document

Sybase multiple databases for CRM


Hi All,

 

We are installing CRM 7.0 EHP3 with a Sybase database on Windows 2008 R2.

The CRM 7.0 ABAP instance is installed and up and running. However, while performing the Java instance installation of CRM on the same machine,

it created a database, which is the second database on the same machine,

and I'm getting an error saying the database is already up and running.

 

Please let me know whether Sybase supports two databases on the same physical machine.

 

Regards

Jawwad Ali.

CRM Java installation error in database creation


Hi All,

 

I'm installing CRM 7.0 EHP3 on Windows 2008 R2,
and the ABAP instance is installed and up and running fine.

However, when installing the Java instance on the same server, it stops at the 'create database server' phase.

 

Error log

Sybase backup email alerts from DBACOCKPIT


I have scheduled Sybase backups in DBACOCKPIT using the central calendar.

Is there any way we can generate e-mail alerts on successful or failed backups?

 

I'd appreciate your help on this.

 

Thank you,

Suresh


DBA Cockpit Error


I have the same problem as reported here: http://scn.sap.com/thread/3338296

 

I don't know what's wrong. When executing t-code DBACOCKPIT, I get an error.

 

Here is the detail:

 

Category                       ABAP Programming Error

Runtime Errors              OBJECTS_OBJREF_NOT_ASSIGNED

Except.                         CX_SY_REF_IS_INITIAL

ABAP Program              CX_DBA_ROOT===================CP

Application Component  BC-DB-DB6

Date and Time               06.04.2013 10:21:52

 

Short text    

    Access via 'NULL' object reference not possible.

 

System environment

    SAP Release..... 731                                                                         

    SAP Basis Level. " "                                                                         

                                           

    Application server... "SAPSRV"                                                               

    Network address...... "192.168.19.128"

    Operating system..... "Windows NT" 

    Release.............. "6.1"

    Hardware type........ "2x AMD64 Level"

    Character length.... 16 Bits

    Pointer length....... 64 Bits

    Work process number.. 4

    Shortdump setting.... "full" 

                                                                                                 

    Database server... "SAPSRV" 

    Database type..... "SYBASE" 

    Database name..... "S01"

    Database user ID.. "SAPSR3"

                                                                                                 

    Terminal.......... "SAPSRV"                                                                  

                                                                                                 

    Char.set.... "C" 

                                                                                                 

    SAP kernel....... 720

    created (date)... "Mar 1 2012 02:06:42"

    create on........ "NT 6.0 6002 S x86 MS VC++ 16.00"

    Database version. "Sybase ASE 15.7.0.105 "                                                   

                                                                                                 

    Patch level. 210 

    Patch text.. " " 

                                                                                                 

    Database............. "15.7" 

    SAP database version. 720                                                                    

    Operating system..... "Windows NT 5.1, Windows NT 5.2, Windows NT 6.0, Windows               

     NT 6.1, Windows NT 6.2" 

                                                                   

    Memory consumption

    Roll.... 0      

    EM...... 8379584   

    Heap.... 0                     

    Page.... 40960                                                                               

    MM Used. 1200848

    MM Free. 2986272

 

User and Transaction                                                                             

    Client.............. 100                                                                     

    User................ "ADM"

    Language key........ "E"

    Transaction......... "DBACOCKPIT "

    Transaction ID...... "FD929EE2F5C3F1D6A1E8000C29178244"

                                                                                                 

    EPP Whole Context ID.... "000C291782441EE2A7D24D7523F441E8"                                  

    EPP Connection ID....... 00000000000000000000000000000000                                    

    EPP Caller Counter...... 0                                                                                                              

    Program............. "CX_DBA_ROOT===================CP"                                      

    Screen.............. "SAPLSDB6CCAD 0072"

    Screen Line......... 3                                                                       

    Debugger Active..... "none"

 

Information on where terminated                                                                  

    Termination occurred in the ABAP program "CX_DBA_ROOT===================CP" -                

     in "HANDLE_EXCEPTION".

    The main program was "SAPLSDB6CCAD ".

    In the source code you have the termination point in line 80

    of the (Include) program "CX_DBA_ROOT===================CM003".

    The termination is caused because exception "CX_SY_REF_IS_INITIAL" occurred in

    procedure "HANDLE_EXCEPTION" "(METHOD)", but it was neither handled locally nor

     declared

    in the RAISING clause of its signature.

                                                                                                 

    The procedure is in program "CX_DBA_ROOT===================CP "; its source

     code begins in line                                                                         

    1 of the (Include program "CX_DBA_ROOT===================CM003 ". 

Consultant Field Note - the great tempdb on master device debate


Recently a colleague of mine posed the question of the pros and cons of having the tempdb database partially on the master device. By default, during the installation of an ASE server, a small fragment of tempdb is placed on the master device (a device is a physical area of disk that a database sits on). Typically it is the size of the model database, which is 2 MB to 4 MB depending on the ASE server version.

 

This small fragment contains the default, system, and log segments. Segments are simply pointers that tell the SAP ASE server where to locate the system tables, user tables, and transaction log (syslogs) table, respectively.

 

On to the debate: since version 4.2 of the ASE server (many years ago), there has been a school of thought that says one should move tempdb off the master device onto a faster disk to get better performance. The premise is that the master device is a very secure area of disk, on a RAID system that provides mirroring and error checking. All of this security comes at a price, and that translated into slower disks. Meanwhile tempdb, where internal worktables are stored, is typically placed on faster, more volatile disks to promote better performance. The debate was: to squeeze further performance gains from the SAP ASE server, one should avoid placing any tempdb activity on the master device. A classic hotspot-avoidance strategy.

 

This strategy had many manifestations, some workable, many not.  You may recognize a couple of these.

 

Creation of dummy tables. The idea was to fill up the first 2 MB or 4 MB of the tempdb database on the master device with a dummy table. Once filled, this space on the master device would be inaccessible, and therefore any growth would take place on the faster tempdb devices.

 

Dropping the system, default, and log segments from the master device. This strategy effectively removes any object placement from the master device. Objects are placed according to the underlying segments; removing the segments effectively removes the placement of the objects the segments refer to.

 

Dropping the tempdb area of disk from the master device. ASE DBAs are forever tinkering with the system tables (caveat: not recommended, for obvious reasons). This strategy involves going into the sysusages table in the master database and removing the row that records the relationship between the tempdb database and the master device. No row, no relationship.

 

There are more strategies, but these are the most common. The common thread is that they are all ineffective, either because of their assumptions about performance gains or because they are simply not recommended, given that better architectures are available today.

 

Personally, I have had issues in the past with the presented strategies: my client removed the tempdb database from the master device, and the ASE server experienced a corrupt tempdb database. I was called in to fix the issue. A couple of special trace flags in the RUNSERVER files and rebuilding the tempdb database on the master device brought the ASE server back to an operational state. This took a considerable amount of effort and time. It all could have been avoided if the tempdb database had been left as originally placed on the master device.

 

Getting back to the debate: how does one balance the performance of placing tempdb on faster disks versus the need for a stable and robust system?

Here is a strategy that gives us both. As the DBA, when I log onto the system I typically run maintenance scripts such as update statistics or table rebuilds. Following best practices, I would not use the 'sa' login but another account such as 'sapsa'. My client logins would also have their own login names.

 

Using multiple or named tempdbs, I would place these tempdbs on faster disks and assign my 'sapsa' user to one named tempdb and my normal clients to another named tempdb, with variations depending on performance needs. The original fragment of the tempdb database on the master device remains unused and is only accessed by the 'sa' login. This strategy gives the client applications and the 'sapsa' login the performance of a faster tempdb, and it protects the system from tempdb failures by keeping the original tempdb fragment on the master device to aid in any rebuilding. This tempdb architecture, combined with a hard relationship between logins and tempdbs, offers us both stability and performance.
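A minimal sketch of the named-tempdb setup described above (device, size, and tempdb names are hypothetical; create temporary database and sp_tempdb are the standard ASE mechanisms for named tempdbs and login bindings):

create temporary database saptempdb_fast on fastdev1 = 4096
go
sp_tempdb 'bind', 'lg', 'sapsa', 'DB', 'saptempdb_fast'
go

The 'lg' argument binds a login; application bindings use 'ap' instead.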

 

This Field Notes Series is dedicated to observations from the field taken from personal consulting experiences.   The examples used have been created for this blog and do not reflect any existing SAP Customer configuration.

Serious issue dumping a 300 GB database on ASE 15.7 ESD 4.2


Hi,

 

We are facing a serious issue trying to dump a 300 GB database:

we tried several methods, and all of them led the server to stop responding entirely:

a simple dump, a dump with compression=1, a dump with compression=100, and a dump with 5 stripes.

 

Currently I have no option left but to quiesce the database, rcp the devices, and mount the database on the target server.

 

Sybase ASE 15.7 ESD 4.2 on Solaris 10, on a Sun M3000 with SCSI disk arrays on ZFS.

 

We are thinking of moving the database to two new internal SAS HDDs to see if it is a problem between Sybase and ZFS.

Sybase Restoration


Hi All,

 

Does anyone have any idea about Sybase restoration, or a specific SAP Note (for Sybase backup/restore) that describes it? I have taken a production backup and am restoring it into the quality system at the moment.

 

However, I am unsure about the DB post-restore activities required before starting the SAP instance. If anyone has suggestions about this, please let me know.
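For the basic restore sequence itself, the standard ASE commands are along these lines (a sketch: the database name and dump file path are placeholders, and for a production-to-quality copy the SAP system copy guide describes the post-load steps to perform before the SAP instance is started):

load database <SID> from '/sybase/backups/PRD.dmp'
go
online database <SID>
go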

 

Thanks and Regards

Vijay Kumar G

Change path of files


Hello,

 

How can I change the files of Sybase ASE from an old path to a new path? I need to move the log device (saplog_1) and saptemp.

 

 

Thanks.
