Wednesday, March 14, 2007

Time flies; the exam is this Sunday, 3/18.
It's the entrance exam for the "National Chiao Tung University academic year 96 in-service master's program",
officially the "National Chiao Tung University College of Electrical Engineering and College of Computer Science In-Service Master's Program --- Computer Science track".

So far 135 people have registered for this track (ugh, that's a lot ... = =")
First stage: written exam 30%; document review: 35%; second stage: oral interview 35%.
A total of 40 will be admitted: up to 20 are admitted directly on first-stage results,
and the top 40 from the first stage go on to the interview, from which 20 more are chosen (I think).

I hope the almighty gods will grant me three wishes:
1. Let the exam cover only things that I know and nobody else does ;
2. Let everyone better than me miss the exam ;
3. Hmm~ give me three more wishes

That's it! Good luck to me!!!

.End.


Thursday, April 6, 2006

Embedded System Design Project
National Chiao Tung University, In-Service Master's Program of the College of Electrical Engineering and the College of Computer Science, 94-2 Embedded System Design

■ Problem Statement :
Your company delivered a Windows-based application system to customers. Suppose that there are five tasks in the system, and three of them are more urgent than the other two. Let there be only three priority levels (i.e., H -> M -> L) available in Windows. As a compromise, you decided to give priority H to the three urgent tasks and priority M to the other two tasks. Priority L is left unused.


The above arrangement seemed to work fine. But one day your customer complained that your system usually exhibits poor performance. After some investigation, you found that the three tasks with priority H sometimes won't give up the CPU for a very long period of time. As a result, the two tasks with priority M have no chance to be serviced by the CPU unless the tasks with priority H voluntarily release the CPU.

Now your boss asks you to implement a resource-reservation mechanism. Under this mechanism, each application is associated with a parameter, namely "_share". For example, if a task is assigned _share(100, 200), then the task requires 100 ms of CPU time every 200 ms. By adopting this concept, each application can receive its desired share of the CPU, and the trouble mentioned above can be eliminated.

To reflect the urgency of different applications, a parameter "_urgency" is assigned to each application. Every application can be assigned a unique urgency level. When two applications contend for the CPU, the application with the higher urgency level always wins. Furthermore, less urgent applications may be preempted by more urgent ones.

There are two implementation issues: First, there is no access to the source code of a proprietary system like Windows, so everything you do must be transparent to the operating system. Second, there are only three different priority levels in your existing system, and you have to emulate a scheme in which each application can have its own unique urgency level.
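
One way to picture the transparent approach (only a rough sketch of the idea, not the assignment's required solution): an external monitor process can emulate _share(run_ms, period_ms) by periodically suspending and resuming all threads of a target process through the Win32 Tool Help API, so neither the target application nor the OS scheduler needs any change. The target PID and the _share values below are hypothetical.

  /* sketch.c - emulate _share(100, 200) for one target process (hypothetical PID) */
  #include <windows.h>
  #include <tlhelp32.h>

  /* Suspend (TRUE) or resume (FALSE) every thread owned by pid. */
  static void set_suspended(DWORD pid, BOOL suspend)
  {
      HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
      THREADENTRY32 te = { sizeof(te) };
      if (snap == INVALID_HANDLE_VALUE) return;
      if (Thread32First(snap, &te)) {
          do {
              if (te.th32OwnerProcessID != pid) continue;
              HANDLE th = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
              if (th) {
                  if (suspend) SuspendThread(th); else ResumeThread(th);
                  CloseHandle(th);
              }
          } while (Thread32Next(snap, &te));
      }
      CloseHandle(snap);
  }

  int main(void)
  {
      DWORD pid = 1234;                         /* hypothetical target process id */
      DWORD run_ms = 100, period_ms = 200;      /* _share(100, 200)               */
      for (;;) {                                /* one scheduling period per loop */
          set_suspended(pid, FALSE);            /* let the task run               */
          Sleep(run_ms);
          set_suspended(pid, TRUE);             /* park it for the rest of the period */
          Sleep(period_ms - run_ms);
      }
  }

Urgency levels could then be layered on top of the same monitor by deciding, once per period, which _urgency-ranked tasks receive their reserved slices first; the three native Windows priorities never need to change.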

■ Check list of stuff to deliver

● An application with a simple GUI, by which users can dynamically assign the parameters "_share" and "_urgency" to applications. *Note* Target applications can be any ordinary applications running in Windows, such as WORD, IE, etc.

● A short document (5 pages, 12pt font, single line spacing) on your implementation. Describe how you emulate a number of urgency levels by using only three priority levels. Also describe how you verify that your approach is correct.


Sunday, December 16, 2007

.
. IBM WebSphere MQ v6.0
. Ex 6 - IBM WebSphere MQ Client Implementation.
. Let's practice the MQ Client Implementation.
.

=== Exercise 6 : WebSphereMQ Client Implementation ===
What we will do :
A. Configure a Server for client connection.
B. Configure a client.
C. Test a client to server environment.
D. Use Auto-Definition of a CHANNEL.
E. Setup and perform Remote Administration.


=== Sample programs for MQ client ===
1. # amqsputc QName [QMgrName]
(This program is invoked the same way as amqsput and has the same parameter structure, but it runs as a WebSphereMQ client, connecting to the Queue Manager over a client connection instead of using server bindings.)

2. # amqsbcgc QName [QMgrName]
(This program is invoked the same way as amqsbcg and has the same parameter structure, but it runs as a WebSphereMQ client.)

3. # amqsgetc QName [QMgrName]
(This program is invoked the same way as amqsget and has the same parameter structure, but it runs as a WebSphereMQ client.)


======================================================
[A. Server Queue Manager setup]
1. Create Queue Manager named QMC06 , and QMC07R :
 # crtmqm QMC06   # crtmqm QMC07R
 # strmqm QMC06   # strmqm QMC07R
 # runmqsc QMC06   # runmqsc QMC07R

 (on QMC06)
 : DEF QL(DLQ) REPLACE
 : ALTER QMGR DEADQ(DLQ)

 : DEF QL(XQMC07R) REPLACE USAGE(XMITQ)
 : DEF CHL(QMC06.TO.QMC07R) CHLTYPE(SDR) REPLACE +
  TRPTYPE(TCP) CONNAME('Host2(9007)') XMITQ(XQMC07R)


 (on QMC07R)
 : DEF QL(DLQ) REPLACE
 : ALTER QMGR DEADQ(DLQ)

 : DEF CHL(QMC06.TO.QMC07R) CHLTYPE(RCVR) REPLACE +
  TRPTYPE(TCP)
 : DEF QL(QL.A) REPLACE

 (on QMC06)
 : DEF QR(QRMT07R) REPLACE +
  RNAME(QL.A) RQMNAME(QMC07R) XMITQ(XQMC07R)


 (on QMC07R)
 # runmqlsr -m QMC07R -t TCP -p 9007

 (on QMC06, start the CHANNEL)
 # runmqchl -C QMC06.TO.QMC07R -m QMC06
 # amqsput QRMT07R QMC06  (test that the channel is working)

2. Define a SVRCONN CHANNEL on QMC07R to make it connectable by clients :
 a. Use QMC07R_CLNT as the CHANNEL name.
 b. Protocol is TCP.

 # runmqsc QMC07R
 : DEFINE CHL(QMC07R_CLNT) CHLTYPE(SVRCONN) REPLACE TRPTYPE(TCP)

3. Be sure that an appropriate Listener function is active for the Server QM.
 # runmqlsr -m QMC07R -t TCP -p 9007

[B. Client setup (Method 1)]
4. Use the MQSERVER environment variable to provide a client-connection CHANNEL Definition to be able to connect to the Queue Manager.

 (UNIX / Linux Systems)
 # export MQSERVER=QMC07R_CLNT/TCP/QMC07R(9007)

 (Windows Systems)
 # SET MQSERVER=QMC07R_CLNT/TCP/QMC07R(9007)

[C. Test the Client connection (Setup Method 1)]
5. Use amqsputc to put messages on the Local Queue QL.A on the Server :
 # amqsputc QL.A QMC07R

6. Use amqsbcgc to browse the message on the Server Queue.
 (The value of Reply-to-QMgr in the MQMD will show.)
 # amqsbcgc QL.A QMC07R

7. Use amqsgetc to get the messages from the Server Queue.
 # amqsgetc QL.A QMC07R
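
For reference, here is a minimal, hedged C sketch of what a client-mode sample such as amqsputc does internally: connect as a WebSphereMQ client (driven by MQSERVER or a client CHANNEL definition table), open the Queue for output, put one message, then clean up. The Queue and Queue Manager names follow this exercise; on UNIX the program would be linked against the client library (for example -lmqic) rather than the server bindings, which is exactly what makes it a client.

  /* clientput.c - minimal client-mode MQPUT, roughly "amqsputc QL.A QMC07R" */
  #include <stdio.h>
  #include <string.h>
  #include <cmqc.h>                             /* MQI definitions          */

  int main(void)
  {
      MQHCONN hcon;                             /* connection handle        */
      MQHOBJ  hobj;                             /* queue handle             */
      MQLONG  cc, reason;
      MQOD    od  = {MQOD_DEFAULT};             /* object descriptor        */
      MQMD    md  = {MQMD_DEFAULT};             /* message descriptor       */
      MQPMO   pmo = {MQPMO_DEFAULT};            /* put-message options      */
      char    qmgr[MQ_Q_MGR_NAME_LENGTH] = "QMC07R";
      char    msg[] = "hello from a WebSphereMQ client";

      MQCONN(qmgr, &hcon, &cc, &reason);        /* client connect (MQSERVER / channel table) */
      if (cc == MQCC_FAILED) { printf("MQCONN failed, reason %d\n", (int)reason); return 1; }

      strncpy(od.ObjectName, "QL.A", MQ_Q_NAME_LENGTH);
      MQOPEN(hcon, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING, &hobj, &cc, &reason);

      memcpy(md.Format, MQFMT_STRING, MQ_FORMAT_LENGTH);
      pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_FAIL_IF_QUIESCING;
      MQPUT(hcon, hobj, &md, &pmo, (MQLONG)strlen(msg), msg, &cc, &reason);
      printf("MQPUT completion %d, reason %d\n", (int)cc, (int)reason);

      MQCLOSE(hcon, &hobj, MQCO_NONE, &cc, &reason);
      MQDISC(&hcon, &cc, &reason);
      return 0;
  }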

[D. Server Queue Manager setup using Auto-Definition of CHANNELs]
8. Enable CHANNEL Auto-Definition in the Queue Manager,
 so that all teams are able to connect to the Queue Manager.
 : ALTER QMGR CHAD(ENABLED)

[E. Client Setup (Method 2)]
9. Use QMC07R to build a client CHANNEL definition table to enable a WebSphereMQ
 client to connect to each Queue Manager which has enabled CHANNEL Auto-Definition :
 a. Create 2 client connection CHANNEL entries to connect to QMC07R.

 (On QMC07R)
 : DEF CHL(CLNT_A) CHLTYPE(CLNTCONN) REPLACE +
  TRPTYPE(TCP) CONNAME('QMC07R(9007)') QMNAME(QMC07R)

 : DEF CHL(CLNT_B) CHLTYPE(CLNTCONN) REPLACE +
  TRPTYPE(TCP) CONNAME('QMC07R(9007)') QMNAME(QMC07R)


[F. Test the Client connection (Setup Method 2)] (requires the MQ Client to be installed and configured)
10. On the client system, ensure the following environment variables point to the client
 CHANNEL definition table just created. Be sure to unset MQSERVER.
 a. MQCHLLIB=
 b. MQCHLTAB=

 [UNIX Systems]
 Default location on the creating Queue Manager :
  export MQCHLLIB=/var/mqm/qmgrs/<QMgrName>/@ipcc
  export MQCHLTAB=amqclchl.tab
  export MQSERVER=

 [Windows Systems]
  SET MQCHLLIB=..\mqm\qmgrs\<QMgrName>\@ipcc
  SET MQCHLTAB=amqclchl.tab
  SET MQSERVER=

11. Use amqsputc again to put a message to QL.A on QMC07R and ensure the
 operation completes successfully. Verify that the new server-connection CHANNEL
 CLNT_A is now defined on QMC07R.
 # amqsputc QL.A QMC07R

12. Stop CHANNEL CLNT_A. Then use amqsputc to put a message to QL.A. Verify
 that the new server-connection CHANNEL CLNT_B is now defined on QMC07R, and
 that the message successfully arrived on the Queue.
 # runmqsc QMC07R
 : stop CHANNEL(CLNT_A)

 # amqsputc QL.A QMC07R


======================================================
======================================================
[G. Setup and perform Remote Administration.]
1. This requires that the managing Queue Manager be the default Queue Manager :

2. # runmqsc -w 15

3. : DISPLAY QMGR

 


Sunday, December 16, 2007

.
. IBM WebSphere MQ v6.0
. Chapter 6.3 - IBM WebSphere MQ Security
.

=== WebSphere MQ Security Implementations ===
1. Object Authority Manager (OAM) facility.
2. CHANNEL Security using Secure-Sockets-Layer (SSL).

(※ MQ provides only Authorization; there is no built-in Authentication)


=== WebSphere MQ Access Control Overview ===
1. WebSphereMQ access control at user and/or group level :
 - UNIX uses groups only (Username must exist, everyone is in nobody.)
 - Windows uses userids and/or groups.
 - Only system-level userids are supported
  (No support for DCE principals, TXSeries userids, and so forth.)
2. Only the first-level name is controlled :
 - Alias Queues, Remote Queues.
 - Resolved name is not significant.


=== Object Authority Manager Installable Service ===
1. [ WebSphereMQ QMgr <---> Object Authority Manager (OAM) Access Control Lists ]
2. Access Control for WebSphereMQ objects :
 - Queue Manager   - Queues
 - Processes     - Namelists
 - Channels      - Authentication information objects
 - Listeners      - Services
3. OAM can be disabled :
 - Remove entry from mqs.ini or Windows Registry
 - Not recommended
 - Very difficult to re-establish uniform authority checking


=== Object Authority Manager : Access Control Lists ===
1. One authority file per object :
 - Plus global permissions files.
2. Each file has one stanza per principal :
 - Principal (User)
 - Authority='bit pattern'
3. Windows OAM bypasses auth files for certain classes of principal
 - SYSTEM, local Administrators group, local mqm group


=== Security Management : setmqaut ===
1. Change the authorizations :
 - Queue Manager   - Queues
 - Processes     - Namelists
 - Channels      - Authentication information objects
 - Listeners      - Services
2. Principal or group level control
3. Granular control of access
 - No generic functions
 - Supports generic profiles

 # setmqaut -m QMgr -t ObjType -n Profile [-p Principal -g Group] permissions
 Example :
 # setmqaut -m QM1 -t queue -n MOON.* -g GP1 +browse +get

4. Note that there are certain principals/groups which are granted automatic access to resources. These are :
 - mqm (user/group)
 - For Windows :
  a. Administrator (user/local group)
  b. SYSTEM (userid)
  c. The user (or principal group) which creates a resource.


=== Security Management : dspmqaut ===
1. Display current authorizations :
 - Queue Manager   - Queues
 - Processes     - Namelists
 - Channels      - Authentication information objects
 - Listeners      - Services
2. Principal or group level control.

 # dspmqaut -m QMgr -t ObjType -n ObjName [-p Principal -g Group ]
 Example :
 # dspmqaut -m QM1 -t q -n QL.Q1 -p mquser
 Entity mquser has the following authorizations for object QL.Q1 :
 get
 browse
 put ...


=== Security Management : dmpmqaut ===
1. Dump current authorizations :
 - QMGR
 - Queues
 - Processes
 - Namelists
 - Authinfo (SSL CHANNEL Security)
 - Channels
 - Listeners
 - Services
2. Principal or group level control.

 # dmpmqaut -m QMgr -t ObjType -n Profile [-p Principal -g Group ]
 Example :
 # dmpmqaut -m QM1 -n a.b.c -t q -p mquser
 The resulting dump would display :
 profile : a.b.*
 object type : queue
 entity : mquser
 type : principal
 authority : get, browse, put, inq


=== Access Control for WebSphereMQ Control Programs ===
1. Most WebSphereMQ control programs
 - Such as crtmqm, strmqm, runmqsc, setmqaut, dspmqaut, dmpmqaut
2. Have restricted access :
 - UNIX/Linux restricts users to the mqm group
  a. Configuration as a part of WebSphereMQ installation.
  b. Control imposed by the O.S. not OAM.
 - Windows allows :
  a. mqm group
  b. Administrators group
  c. System userid
 - OpenVMS restricts users to those granted the MQM identifier.
 - Compaq NSK allows :
  a. MQM group
  b. SUPER.SUPER ID


=== Authority Checking in the MQI ===
1. MQI calls with security checking :
 - MQCONN / MQCONNX
 - MQOPEN
 - MQPUT1 (implicit MQOPEN)
 - MQCLOSE  (For Dynamic Queues).
2. WebSphereMQ events as audit records :
 - Events written to SYSTEM.ADMIN.QMGR.EVENT Queue.
 - Documented in Monitoring WebSphereMQ manual.
3. Reason code MQRC_NOT_AUTHORIZED (2035) returned if not authorized.
4. The MQCLOSE is generally not checked because the close options are usually none.
5. If the close options are set to MQCO_DELETE or MQCO_DELETE_PURGE (this is only for permanent
  Dynamic Queues) then, unless the Queue was created using the current handle, there is a check to
 determine if the user is authorized to delete the Queue.


=== Security and Distributed Queuing === ☆
1. Put authority :
 - Option for the receiving end of a message CHANNEL.
  a. Default user identifier is used.
  b. Context user identifier is used.
2. Transmission Queue :
 - Messages destined for a Remote Queue Manager are put on a Transmission Queue by the
  Local Queue Manager
  a. An application should not normally need to put messages directly on a Transmission Queue,
   or need authority to do so.
 - Only the special system programs that put messages directly on a Transmission Queue should
  have the authority to do so.

=== Message Context ===
1. Information about source of message :
 - Identity section (user related)
 - origin section (program related)
2. Part of message Descriptor.
3. Can be passed on in related messages.
4. Message context information allows the application that retrieves a message to find out about
 the originator of the message. The retrieving application may want to :
 a. Check that the sending application has the correct level of authority.
 b. Keep an audit of all the messages it has worked with.
 c. The information is held in two fields : identity context and origin context.


=== The Context Fields ===
An application can request the Queue Manager to set the context fields of a message by using the put-message option MQPMO_DEFAULT_CONTEXT on an MQPUT or MQPUT1 call. This is the default action if no context option is specified.

( ps : you can see these fields with # amqsbcg Queue QMgr )

1. Identity context :
 - UserIdentifier (user that originated the message.)
 - AccountingToken
  a. Windows (SID, Security ID in compressed format)
  b. i5/OS (Job accounting code)
  c. UNIX (Numeric user ID in ASCII characters)
 - ApplIdentityData (Blank)
2. Origin context :
 - PutApplType (MQAT_AIX, MQAT_CICS...etc.)
 - PutApplName
 - PutDate ( YYYYMMDD(GMT) )
 - PutTime ( HHMMSSTH(GMT) )
 - ApplOriginData (Blank)


=== No Context ===
1. Requested by a put message option :
 - MQPMO_NO_CONTEXT
 - The Queue Manager clears all the context fields; specifically, PutApplType is set to MQAT_NO_CONTEXT
2. To request "Default Context" or "No Context" requires no more authority than that required to put the message on the Queue.


=== Passing Context ===
A → [Queue1] → B → [Queue2] → C

1. Put messages on Queue2 with same Identity context as message taken from Queue1
2. Open Queue1 as "Save All Context"
3. Put messages with "Pass Identity Context"
4. Or transfer "No Context"
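
A hedged C sketch of steps 2 and 3 above, using the MQI options involved: the input Queue is opened with MQOO_SAVE_ALL_CONTEXT, and the forwarded put uses MQPMO_PASS_IDENTITY_CONTEXT with pmo.Context naming the handle the context was saved from. The Queue names and the missing error handling are illustrative only.

  /* passctx.c - B reads from QUEUE1 and forwards to QUEUE2, passing identity context */
  #include <string.h>
  #include <cmqc.h>

  void forward_one(MQHCONN hcon)
  {
      MQHOBJ hin, hout;
      MQLONG cc, reason, datalen;
      MQOD   od_in  = {MQOD_DEFAULT};
      MQOD   od_out = {MQOD_DEFAULT};
      MQMD   md     = {MQMD_DEFAULT};
      MQGMO  gmo    = {MQGMO_DEFAULT};
      MQPMO  pmo    = {MQPMO_DEFAULT};
      char   buf[1024];

      /* Step 2: open the input Queue with "save all context" */
      strncpy(od_in.ObjectName, "QUEUE1", MQ_Q_NAME_LENGTH);
      MQOPEN(hcon, &od_in, MQOO_INPUT_AS_Q_DEF | MQOO_SAVE_ALL_CONTEXT,
             &hin, &cc, &reason);

      gmo.Options = MQGMO_NO_SYNCPOINT | MQGMO_WAIT;
      gmo.WaitInterval = 5000;                       /* wait up to 5 seconds */
      MQGET(hcon, hin, &md, &gmo, sizeof(buf), buf, &datalen, &cc, &reason);

      /* Open the output Queue so that identity context may be passed */
      strncpy(od_out.ObjectName, "QUEUE2", MQ_Q_NAME_LENGTH);
      MQOPEN(hcon, &od_out, MQOO_OUTPUT | MQOO_PASS_IDENTITY_CONTEXT,
             &hout, &cc, &reason);

      /* Step 3: put with "pass identity context", naming where it was saved */
      pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_PASS_IDENTITY_CONTEXT;
      pmo.Context = hin;
      MQPUT(hcon, hout, &md, &pmo, datalen, buf, &cc, &reason);

      MQCLOSE(hcon, &hout, MQCO_NONE, &cc, &reason);
      MQCLOSE(hcon, &hin,  MQCO_NONE, &cc, &reason);
  }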


=== Alternate User Authority ===
A → [Queue1] → B → [Queue2] → C

1. Put messages with A's authority :
 - B needs appropriate authority.
 - UserID taken from message Context.
2. How it is requested ? :
 - AlternateUserID field in Object Descriptor.
 - Option on MQOPEN or MQPUT1
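
And a small hedged sketch of the MQOPEN side of this: B opens Queue2 requesting alternate-user authority and supplies the user ID it took from the received message's context. Option and field names are from cmqc.h; the rest (queue name, function shape) is illustrative.

  /* altuser.c - open QUEUE2 so the following MQPUT is authorized as the originator A */
  #include <string.h>
  #include <cmqc.h>

  MQHOBJ open_as_originator(MQHCONN hcon, const MQMD *md, MQLONG *cc, MQLONG *reason)
  {
      MQOD   od = {MQOD_DEFAULT};
      MQHOBJ hobj;

      strncpy(od.ObjectName, "QUEUE2", MQ_Q_NAME_LENGTH);
      /* user ID taken from the identity context of the message B received */
      memcpy(od.AlternateUserId, md->UserIdentifier, MQ_USER_ID_LENGTH);

      MQOPEN(hcon, &od, MQOO_OUTPUT | MQOO_ALTERNATE_USER_AUTHORITY,
             &hobj, cc, reason);
      return hobj;
  }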


=== Setting Context ===
1. Two open options that require authority to use :
 - MQOO_SET_IDENTITY_CONTEXT
 - MQOO_SET_ALL_CONTEXT
2. Two corresponding put message options :
 - MQPMO_SET_IDENTITY_CONTEXT
 - MQPMO_SET_ALL_CONTEXT
3. Normally used by special programs only :
 - Message CHANNEL agents
 - System utilities
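
A hedged C fragment of how such a special program (for example a bridge or utility) might set the context itself, assuming it holds the authority for MQOO_SET_ALL_CONTEXT; the queue name and field values are purely illustrative:

  /* setctx.c - put a message whose context fields are supplied by the application */
  #include <string.h>
  #include <cmqc.h>

  void put_with_own_context(MQHCONN hcon, char *msg, MQLONG msglen)
  {
      MQOD   od  = {MQOD_DEFAULT};
      MQMD   md  = {MQMD_DEFAULT};
      MQPMO  pmo = {MQPMO_DEFAULT};
      MQHOBJ hobj;
      MQLONG cc, reason;

      strncpy(od.ObjectName, "QL.A", MQ_Q_NAME_LENGTH);
      MQOPEN(hcon, &od, MQOO_OUTPUT | MQOO_SET_ALL_CONTEXT, &hobj, &cc, &reason);

      /* Fill identity fields ourselves (illustrative value, blank padded to 12) */
      memcpy(md.UserIdentifier, "bridgeusr   ", MQ_USER_ID_LENGTH);
      pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_SET_ALL_CONTEXT;

      MQPUT(hcon, hobj, &md, &pmo, msglen, msg, &cc, &reason);
      MQCLOSE(hcon, &hobj, MQCO_NONE, &cc, &reason);
  }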


=== CHANNEL Exit Programs ===
MQPUT → Transmission Queue → [Message] → MCA → Send →
MQGET ← Destination Queue ← [Message (retry)] ← MCA ← Receive ←

1. The uses of CHANNEL Exit programs are :
 - Auto-definition Exit can be used to modify the CHANNEL definition derived from
         the model SYSTEM.AUTO.RECEIVER
 - Security Exit is primarily used by the MCA at each end of a message CHANNEL
         to authenticate its partner.
 - Send and Receive Exits can be used for purposes such as data compression
         / decompression and data encryption / decryption.
 - Message Exit can be used for any purpose which makes sense at the message
         level. The following are some examples :
     a. Application data conversion
     b. Encryption / decryption
      c. Journaling
     d. Additional security checks such as validating an incoming user identifier.
     e. Substitution of one user identifier for another as a message enters a new
      security domain.
      f. Reference message handling.
 - Message-retry Exit is called when an attempt to open a destination Queue, or put a
          message on a destination Queue, has been unsuccessful. The
          exit can be used to determine under what circumstances the
          MCA should continue to retry, how many times it should retry,
          and how frequently.
2. The Auto-Definition Exit is only supported on WebSphereMQ for AIX, HP-UX, iSeries,
 Solaris, and Windows, and on MQSeries for Compaq Tru64 UNIX and OS/2 Warp V5.1


=== CHANNEL Exit Programs on MQI CHANNELs ===
                        [Auto-Definition]
          [Security]          [Security]
 MQCONN ←→        Send Receive
 MQOPEN ←→ CLNTCONN ←——————→ SVRCONN
  MQPUT ←→ 

1. No CHANNEL Exit Programs can be called on a client system if the MQSERVER
 environment variable is used to define a simple client connection.
2. The Auto-Definition Exit can be used to modify the CHANNEL definition derived
 from the model SYSTEM.AUTO.SVRCONN


=== Secure Sockets Layer ===
1. Protocol to allow transmission of secure data over an insecure network.
2. Combines these techniques :
 - Symmetric / Secret Key encryption
 - Asymmetric / Public Key encryption
 - Digital Signature
 - Digital Certificates
3. Protection :
 - Client / Server
 - Qmgr / QMgr CHANNELs
4. To combat Security Problems :
 - Eavesdropping ← Encryption techniques
 - Tampering ← Digital Signature
 - Impersonation ← Digital Certificates


=== QMGR Attributes for SSL ===
1. ALTER QMGR command :
 - SSLKEYR  Sets the SSLKeyRepository.
 - SSLCRLNL  Sets the SSLCRLNamelist.
 - SSLCRYP  Sets the SSLCryptoHardware.
 - SSLTASKS  Sets the SSLTasks.
 - SSLEV   Enables or Disables SSL event messages.
 - SSLFIPS  Specifies if only FIPS-certified algorithms can be used.

ps : CRL (Certificate Revocation List)

=== QMGR Authentication Object ===
1. ALTER AUTHINFO
2. DEFINE AUTHINFO
3. DELETE AUTHINFO
4. DISPLAY AUTHINFO


=== Channel Attributes for SSL ===
1. DEFINE or ALTER CHANNEL
 - SSLCIPH (CipherSpec)
 - SSLPEER
 - SSLCAUTH


=== Access Control for a WebSphereMQ Client ===
1. Access control is based on a user ID used by the server connection process :
 - Value of MCAUserIdentifier in MQCD determines this user ID
2. Security Exits at both ends of the MQI CHANNEL :
 - Client Security Exit can flow a user ID and password
 - Server Security Exit can authenticate the user ID and set MCAUserIdentifier
3. No security Exit at the client end of the MQI CHANNEL :
 - Value of logged_in USERID flows to the server system.
 - Server Security Exit can authenticate the user ID and set MCAUserIdentifier
4. No Security Exit at either end of the MQI CHANNEL :
 - MCAUserIdentifier has the value of MCAUSER if it is nonblank.
 - MCAUserIdentifier has the value of flowed user ID otherwise.


=== Remote Queuing and Clients ===
1. CHANNEL Exits :
 - A number of CHANNEL Exits are available in the product and as SupportPacs
 - Several vendors are in this market too.
2. MCAUSER :
 - The default setting is wide open, especially for client attach.
 - May want to set this to restrict who can access your Queue Manager.
3. MQ_USER_ID environment variable :
 - This was removed for Windows NT and UNIX in the 5.1 release of the client environment.
 - The logged-in username is now automatically used.
 - But this is not authenticated at the server ; you may still need security Exits.


Sunday, December 16, 2007

.
. IBM WebSphere MQ v6.0
. Chapter 6.2 - IBM WebSphere MQ Clients
.

=== WebSphere MQ Client ===
 ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
  Client-System      Server-System
 WMQ-Application      WMQ-Queue-Manager
 Client-Connection      Server-Connection
 Communications-stack→→→Communications-stack
         MQI-CHANNEL
 ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
1. Assured delivery.
2. Queue storage.
3. Data conversion.
4. Administration.
5. Recovery.
6. Syncpoint control.


=== MQI Clients Explained ===
1. The full range of MQI calls and options is available to a WebSphereMQ client
 application, including the following :
 - The use of MQGMO_CONVERT option on the MQGET call. This causes the
  application data of the message to be converted into the numeric and
  character representation in use on the client system. The server Queue
  Manager provides the usual level of support to do this.
 - A client application may be connected to more than one Queue Manager
  simultaneously. Each MQCONN call to a different Queue Manager returns a
  different connection handle. This does not apply if the application is not
  running as a WebSphereMQ client.
2. The MQI stub which is linked with an application when running as a client is
  different from that used when the application is not running as client. An
  application will receive the reason code MQRC_Q_MGR_NOT_AVAILABLE
  on an MQCONN call if it is linked with the wrong MQI stub.

 ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
 1. MQCONN ---> (Queue Manager)
  2. MQOPEN ---> (Queue)
   3. MQPUT / MQGET / MQINQ / MQSET

  MQBEGIN
   MQPUT / MQGET
   IF successful -> MQCMIT
   ELSE MQBACK

  4. MQCLOSE ---> (Queue)
 5. MQDISC ---> (Queue Manager)

 ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆


=== Syncpoint Control on a Base Client ===
1. A WebSphere MQ client application may participate in a Local unit of work
 involving MQSeries resources.
 - Uses the MQCMIT and MQBACK calls for this purpose.
2. A WebSphere MQ client application cannot participate in a Global unit of
 work involving WebSphereMQ resources.
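
A hedged C fragment of such a local unit of work, just to show where MQCMIT and MQBACK sit (hcon and hobj are assumed to be an existing connection and an open output Queue; everything else is illustrative):

  /* localuow.c - put one message under syncpoint, then commit or back out */
  #include <cmqc.h>

  void put_in_local_uow(MQHCONN hcon, MQHOBJ hobj, char *msg, MQLONG msglen)
  {
      MQMD   md  = {MQMD_DEFAULT};
      MQPMO  pmo = {MQPMO_DEFAULT};
      MQLONG cc, reason;

      pmo.Options = MQPMO_SYNCPOINT;            /* join the local unit of work */
      MQPUT(hcon, hobj, &md, &pmo, msglen, msg, &cc, &reason);

      if (cc != MQCC_FAILED)
          MQCMIT(hcon, &cc, &reason);           /* make the put visible        */
      else
          MQBACK(hcon, &cc, &reason);           /* undo the whole unit of work */
  }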


=== Extended Transactional Client ===
1. An Extended Transactional Client can participate in a Global unit of work :
 - Transaction manager runs on client system.
 - Transaction manager provides syncpoint processing.

=== MQ Client Installation ===
(omitted)


=== Defining an MQI CHANNEL === (MQ Client CHANNEL)
1. Use the DEFINE CHANNEL command with parameters :
 - CHLTYPE  CLNTCONN or SVRCONN (SVRCONN is the server-side end dedicated to client connections)
 - TRPTYPE  DECNET, LU62, NETBIOS, SPX or TCP.
 - CONNAME(string)  For a client connection only.
 - QMNAME(string)  For a client connection only.
2. No operational involvement on an MQI CHANNEL :
 - An MQI CHANNEL starts when a client application issues MQCONN
  (or MQCONNX)
 - An MQI CHANNEL stops when a client application issues MQDISC
3. Do not forget to configure and refresh the inet daemon, or to start the
 WebSphereMQ Listener, on the server system.


=== Two ways of Configuring an MQI CHANNEL ===
1. Method_1 :
 - On the server system, define a server connection.
 - On the client system, set the environment variable.
 - MQSERVER=ChannelName/TransportType/ConnectionName

 (Windows : SET MQSERVER=VENUS.SVR/TCP/hostname(port) )
 (UNIX : export MQSERVER=VENUS.SVR/TCP/hostname(port) )

2. Method_2 :
 - On the server system, define a client connection and a server connection.
 - If not on a file server, copy the client CHANNEL definition table from the server
  system to the client system.
 - On the client system, set the environment variables :
  a. MQCHLLIB= 
   Path to the directory containing the client CHANNEL definition table.
  b. MQCHLTAB=
   Name of the file containing the client CHANNEL definition table.

  (Windows : SET MQCHLLIB=C:\MQM
        SET MQCHLTAB=AMQCLCHL.TAB )
  (UNIX : export MQCHLLIB=/mqmtop/qmgrs/QUEUEMANAGERNAME/@ipcc
      export MQCHLTAB=AMQCLCHL.TAB )


=== Auto-Definition of CHANNELs ===
1. Applies only to the end of a CHANNEL with type :
 - Receiver
 - Server connection
2. Function invoked when an incoming request is received to start a CHANNEL
 but there is no CHANNEL definition.
3. CHANNEL definition is created automatically using the model :
 - SYSTEM.AUTO.RECEIVER
 - SYSTEM.AUTO.SVRCONN
4. Partner's values are used for :
 - CHANNEL name.
 - Sequence number wrap value.
5. To enable the automatic definition of CHANNELs, the attribute ChannelAutoDef
 of the Queue Manager object must be set to MQCHAD_ENABLED.
 The Corresponding parameter on the ALTER QMGR command is CHAD(ENABLED)
6. CHANNEL auto-definition events can be enabled by setting the attribute ChannelAutoDefEvent
 of the Queue Manager object to MQEVR_ENABLED.
 The Corresponding parameter on the ALTER QMGR command is CHADEV(ENABLED)


=== Let a Queue Manager be accessed by MQ Explorer ===
(☆☆☆ by AaA ☆☆☆)
1. SYSTEM.ADMIN.SVRCONN (Windows default; on UNIX/Linux it has to be added manually)
2. # runmqsc QM1
  : DIS CHANNEL(SYSTEM.ADMIN.SVRCONN)
  : ALTER CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('mqm')
  (MCAUSER is blank by default, meaning the connecting UserID/Group is checked. Setting it to mqm means every incoming connection is automatically signed on as mqm.)


Sunday, December 16, 2007

.
. IBM WebSphere MQ v6.0
. Chapter 6.1 - WebSphereMQ Family SupportPacs
.

=== WebSphereMQ Family SupportPacs ===
http://www.ibm.com/software/integration/support/supportpacs

1. MO01 ( Event and Dead Letter Queue Monitor ) :
 This SupportPac is the MQSeries Event queue monitor, Dead Letter queue monitor and Expired message remover for Windows, Java, OS/2 and AIX.

2. MS03 (Save Queue Manager object definitions using PCFs (saveqmgr) ) :
 This SupportPac (saveqmgr) saves all the objects, such as queues, channels, etc., defined in either a local or a remote queue manager to a file.


Saturday, December 15, 2007
 

.
. Let's practice Queue Manager Clusters
.

=== Exercise 5 : Queue Manager Clusters ===
What we will do :
A. Create Clusters.
B. Define all required WebSphereMQ objects for Queue Manager Clusters.
C. Test and Configure Clusters.
D. Manage workload in Clusters.

======================================================
[QM1]————————————
 ∣Cluster Transmission Queue∣
 ∣       [ Cluster-Sender CHANNEL ] →→ [QM3]
 ∣Local Application Queues  ∣          ↙
 ∣            ∣    [QM2]  ↙
 ∣Cluster Command Queue ∣   ↙     ↙
 ∣        [ Cluster-Receiver CHANNEL ][QM4]
 ∣Cluster Repository Q   ∣
  —————————————


======================================================
[A. Set up the Cluster connections.]
1. Create a new default Queue Manager QM1 to be used in a Queue Manager Cluster.
 # crtmqm -q QM1   (-q : make it the default QM)
 # crtmqm QM3
 
2. Start the Queue Manager :
 # strmqm QM1
 # strmqm QM3
 
3. Start the Listener function for your Queue Manager QM1 on port 9051
  using the WebSphereMQ Listener.
 # runmqlsr -m QM1 -t tcp -p 9051
 # runmqlsr -m QM3 -t tcp -p 9053
 
4. Define the Cluster connection objects required for your Queue Manager.
  The Objects needed should include the following :
 a. One Local Queue to be used as Dead Letter Queue.
   # runmqsc QM1
   # runmqsc QM3
   : DEF QL(DLQ)
   : ALTER QMGR DEADQ(DLQ)
    : DIS QMGR DEADQ   (verify the Dead Letter Queue)
 b. One Cluster-Receiver CHANNEL (CLUSRCVR) pointing to the owning QM.
  (On Every Queue Manager in Cluster)
  ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
  DEF CHL(TO.CLUS_A9.QM#) CHLTYPE(CLUSRCVR) REPLACE +
   TRPTYPE(TCP) CONNAME('Hostname(905#)') +
   SHORTRTY(600) SHORTTMR(60) DISCINT(30) CLUSTER(CLUS_A9)

  ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
  ( # = your QM number in the Cluster; here we first define the receiving end on QM1 )
  DEF CHL(TO.CLUS_A9.QM1) CHLTYPE(CLUSRCVR) REPLACE +
   TRPTYPE(TCP) CONNAME('localhost(9051)') +
   SHORTRTY(600) SHORTTMR(60) DISCINT(30) CLUSTER(CLUS_A9)
   ( ps : in fact, as soon as this runs, CLUS_A9 shows up in MQ Explorer )

  ( # = your QM number in the Cluster; the other receiving end is defined on QM3 )
  DEF CHL(TO.CLUS_A9.QM3) CHLTYPE(CLUSRCVR) REPLACE +
   TRPTYPE(TCP) CONNAME('localhost(9053)') +
   SHORTRTY(600) SHORTTMR(60) DISCINT(30) CLUSTER(CLUS_A9)

 c. One Cluster Sender CHANNEL pointing to a (the other) Repository
  Queue Manager in your Cluster.
  ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
  DEF CHL(TO.CLUS_A9.QM*) CHLTYPE(CLUSSDR) REPLACE +
   TRPTYPE(TCP) CONNAME('Hostname(905*)') +
   SHORTRTY(600) SHORTTMR(60) DISCINT(30) CLUSTER(CLUS_A9)

  ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆

  ( * = the Full Repository ; this one is defined on QM3, pointing at QM1 )
  DEF CHL(TO.CLUS_A9.QM1) CHLTYPE(CLUSSDR) REPLACE +
   TRPTYPE(TCP) CONNAME('localhost(9051)') +
   SHORTRTY(600) SHORTTMR(60) DISCINT(30) CLUSTER(CLUS_A9)


  ( * = the Full Repository ; the other one is defined on QM1, pointing at QM3 )
  DEF CHL(TO.CLUS_A9.QM3) CHLTYPE(CLUSSDR) REPLACE +
   TRPTYPE(TCP) CONNAME('localhost(9053)') +
   SHORTRTY(600) SHORTTMR(60) DISCINT(30) CLUSTER(CLUS_A9)

 d. If your Queue Manager is to be a Full Repository,
  ALTER the Queue Manager to include the Cluster name.
   ALTER QMGR REPOS(CLUS_A9)   (run on each QM that is to hold a full repository for the Cluster)

  (ps : same here, you can check that CLUS_A9 has now been fully built... lol )
   Verify on QM1 => DIS CLUSQMGR(*)
   Verify on QM1 => DIS CHSTATUS(*)

   Verify on QM1 => PING CHL(TO.CLUS_A9.QM3)
   Verify on QM3 => PING CHL(TO.CLUS_A9.QM1)
  ( On success => AMQ8020: Ping WebSphere MQ channel complete. )
  ( On failure => AMQ9547: Type of remote channel not suitable for the requested action. )
 AMQ9547 : Type of remote channel not suitable for action
 Cause :
  It is not possible to start a Cluster-Receiver CHANNEL that uses the group Listener port.
 Solution :
  Start a non-shared Listener (INDISP(QMGR)) and ALTER the Cluster Receiver CHANNEL to
  use its port number rather than the group Listener port.

5. Wait until all CHANNELs have timed out according to the DISCINT value.
 
6. What is the CURDEPTH on the SYSTEM.CLUSTER.REPOSITORY.QUEUE ?
 DIS Q(SYSTEM.CLUSTER.REPOSITORY.QUEUE) CURDEPTH
 
[B. Set up the Cluster application objects.]
7. Define the Cluster application objects required on your Queue Manager.
 Define all Queues with DEFPSIST(YES) and for all Cluster Queue DEFBIND(OPEN).
 a. Two or more Local Cluster Queues QL.C#
  (existing in more than one Queue Manager = multi-instance Queue)

 (On QM1 or QM3; here we first DEF it on QM1 to test)
 DEF QL(QL.C1) REPLACE DEFPSIST(YES) DEFBIND(OPEN) CLUSTER(CLUS_A9)

8. Wait until all CHANNELs have timed out according to the DISCINT value.

9. What is now the CURDEPTH on the SYSTEM.CLUSTER.REPOSITORY.QUEUE ?
 DIS Q(SYSTEM.CLUSTER.REPOSITORY.QUEUE) CURDEPTH

[C. Test Clustering.]
10. Prepare a text file with 9 messages.
  Each message should contain a sequence number.
  Use this text file in the following steps with amqsput via standard input.

 Example :
 0001 MSG msg text1............
 0002 MSG msg text22...........
 0003 MSG msg text333..........
 0004 MSG msg text4............
 0005 MSG msg text55...........
 0006 MSG msg text666..........
 0007 MSG msg text7............
 0008 MSG msg text88...........
 0009 MSG msg text999..........


11. From the non-repository Queue Manager, run amqsput to put messages to the
 Cluster Queue that is not defined on that Queue Manager.
 Check which Queue and Queue Manager the messages arrive on.
 a. QL.C# (Cluster Queue)
  # amqsput QL.C1 QM3
  - All messages are put on one instance of the Queue.
  - CHANNEL activity to full repository and to QM where messages are put.
 
12. Set DEFBIND(NOTFIXED) for all Cluster Queues on all Queue Managers in your Cluster.
 Is there any CHANNEL activity in the whole Cluster ?
 - ALTER QL(QL.C1) DEFBIND(NOTFIXED)
or - DEF QL(QL.C1) REPLACE DEFPSIST(YES) DEFBIND(NOTFIXED) CLUSTER(CLUS_A9)
 - Yes, because the change of the DEFBIND attribute has to be communicated.

13. On which instances of the destination Queue do the messages arrive ?
 Is there any CHANNEL activity ?
 - The messages are now distributed between all instances of the Queue QL.C# (Round Robin)
 - Because of remote operations we have CHANNEL activity.

14. Stop one of the Remote Cluster Queue Managers.

15. Again put 9 messages to the Cluster Queue that is not Local on your Queue Manager.
 - The messages are now put to the remaining instances of the Queue QL.C#

16. Restart the previously stopped Cluster Queue Manager.

17. Disable puts on all Queue instances of QL.C# in your Cluster.

18. Again put 9 messages to QL.C#

19. Explain the error indication you get :
 - Reason Code 2268 is returned to the putting application.
  The status PUT(DISABLED) is also known on the Local Queue Manager, even though all
  instances are located on Remote QMGRs in the Cluster.
 - The Cluster Queue entry in the Local Queue Manager is holding this information.
 - DIS Q(QL.C*) CLUSINFO ALL


Full Repository : is a Queue Manager that hosts a complete set of information about every Queue Manager in the cluster.

Partial Repositories : the other Queue Managers in the cluster, which inquire about the information in the full repositories and build up their own subsets of this information.

※ If an MQ Cluster is configured with only one Full Repository, it has a single point of failure: the Cluster won't function if that Full Repository goes down. With multiple Full Repositories, if one goes down, the others take over managing the Cluster.

※ Each Queue Manager should have at least one Cluster-Sender CHANNEL (CLUSSDR) and one Cluster-Receiver CHANNEL (CLUSRCVR), regardless of whether the Queue Manager is a full or a partial repository. The only exception is an MQ Cluster with only one full repository; that full repository needs only a Cluster-Receiver CHANNEL (CLUSRCVR).

※ A Full Repository pushes its information via a Cluster-Sender CHANNEL (CLUSSDR) to another full repository's Cluster-Receiver CHANNEL (CLUSRCVR). These two CHANNELs should have the same name.


=== Set MQ Cluster Using IBM WebSphereMQ Explorer ===
(E1) Queue Manager Clusters :
 http://publibfp.boulder.ibm.com/epubs/pdf/csqzah07.pdf

(E2) Configuring WebSphereMQ Cluster :


Saturday, December 15, 2007

.
. IBM WebSphere MQ v6.0
. Chapter 5.3 - IBM WebSphere MQ Clusters
.

=== What is an MQ Cluster ? ===
1. A Cluster is a collection of Queue Managers that may be on different platforms,
 but typically serve a common application.
2. Every Queue Manager can make the Queues that they host available to every
 other Queue Manager in the Cluster, without the need for (Remote) Queue definitions.
3. Cluster specific objects remove the need for explicit CHANNEL definitions and
 Transmission Queues for each destination Queue Manager.
4. The Queue Managers in a Cluster will often take on the role of a client or a server.
 The servers will host the Queues that are available to the members of the Cluster,
 also running applications that process these messages and generate responses.
 The clients PUT messages to the servers Queues and may receive back response messages.
5. Queue Managers in a Cluster will normally communicate directly with each other,
 although typically, many of the client systems will never have a need to communicate
 with other client systems.


=== Cluster Support Objects ===

======================================================
[QM1]————————————
 ∣Cluster Transmission Queue∣
 ∣       [ Cluster-Sender CHANNEL ] →→ [QM3]
 ∣Local Application Queues  ∣          ↙
 ∣            ∣    [QM2]  ↙
 ∣Cluster Command Queue ∣   ↙     ↙
 ∣        [ Cluster-Receiver CHANNEL ][QM4]
 ∣Cluster Repository Q   ∣
  —————————————

======================================================

1. Cluster Repository (Queue) :
 - A collection of information about the Queue Managers that are members of a Cluster,
  including Queue Manager names, their CHANNELs, the Queues they host and so forth.
 - This repository information is exchanged through messages sent to a Queue called
  SYSTEM.CLUSTER.COMMAND.QUEUE and stored on a Queue with the fixed name
  SYSTEM.CLUSTER.REPOSITORY.QUEUE
 - Repositories may be full or partial - more about this on the next visual. Each Cluster
  Queue Manager must have at least one connection to another Queue Manager that
  owns a full repository.

2. Cluster-Sender CHANNEL :
 - A CHANNEL definition of the TYPE(CLUSSDR) on which a Cluster Queue Manager can
  send messages to another Queue Manager in the Cluster that holds a full repository.
  This CHANNEL is used to notify the repository of any changes of the Queue Manager's
  status, for example the addition or removal of a Queue. It is only used for the initial contact
  with the first full repository Queue Manager. From this one the Local Queue Manager learns
  whatever it needs to know.
 - Note : Application messages will be sent by auto-defined sender CHANNELs that are
  created during operation based on repository information from other Cluster Queue Managers

3. Cluster-Receiver CHANNEL :
 - A CHANNEL definition of the TYPE(CLUSRCVR) on which a Cluster Queue Manager can
  receive messages from within the Cluster. Through the definition of this object, a
  Queue Manager is advertised to the other Queue Managers in the Cluster, thus enabling
  them to auto-define their appropriate CLUSSDR CHANNELs for this Queue Manager.
 - You need at least one Cluster-Receiver CHANNEL for each Cluster Queue Manager.

4. Cluster Transmission Queue :
 - All the messages from the Queue Manager to any other Queue Manager in the Cluster
  are locally put to this Queue named SYSTEM.CLUSTER.TRANSMIT.QUEUE
 - It must exist in each Cluster Queue Manager

5. Cluster Queue :
 - A Cluster Queue is a Queue that is hosted by a Cluster Queue Manager and made available
  to Queue Managers in the Cluster. The Local Queue is either preexisting or created on the
  Local Queue Manager, and to play a role in the Cluster the Local Queue definition specifies
  the Cluster name. The other Queue Managers can see this Queue and can use
  it to put messages without the use of a Remote Queue definition. The Queue can be
  advertised in more than one Cluster.


=== More About Repositories ===
1. Each Cluster Queue Manager has to have a Local Queue called
 SYSTEM.CLUSTER.REPOSITORY.QUEUE where all Cluster related information is stored.
2. At least one (but for availability reasons preferably 2 or more) Cluster Queue Managers have
 to hold full repositories; that means a complete set of information about every
 Queue Manager in the Cluster.
3. For each Cluster Queue Manager, a Cluster-Sender CHANNEL has to be predefined that
 connects to one of the repository Queue Managers.
4. Repository Queue Managers (sometimes simply called repositories) must be fully
 interconnected with each other and positioned in the network so as to give a
 high level of availability.
5. Normal Queue Managers build up and maintain a partial repository that contains information
 about those Queue Managers and Queues that are of interest to it. This information may be
 updated and extended during operation through inquiries of a full repository.

※ Each Cluster has at least two Queue Managers.
※ Each Cluster should have at least two Full Repositories.
※ A single QM should act as the Full Repository of only one Cluster.

※ When using MQ Explorer ---> Cluster-Receiver CHANNEL connection name : hostname(1414)


=== Setting Up a Cluster ===
1. QM1 is made a full repository Queue Manager for the Cluster named CLUS_A3 :
 ALTER QMGR REPOS(CLUS_A3)
 
 DEFINE CHANNEL(TO.QM1) +
  CHLTYPE(CLUSRCVR) CONNAME(...) +
  CLUSTER(CLUS_A3) DESCR('To other Repository')

 
 DEFINE QLOCAL(QUEUE1) CLUSTER(CLUS_A3)

2. A Queue Manager may be associated with more than one Cluster at a time.
 The same is true for Queues and CHANNELs :
 - In this case a NAMELIST object has to be created with multiple Cluster names as
  single entries.
 - Then, with all DEFINE commands, the name of this namelist has to be referenced
  instead of the Cluster name, and the REPOS attribute of the ALTER QMGR command
  changes to REPOSNL.


=== WorkLoad Balancing Attributes === (CLWL)

CLUSTER "CLUS_A3"
==========================================================================
 MQOPEN QNAME(TARGET.Q)
  ↓        ↗ [QM2 ] TARGET.Q ↘
 [ QM1 ] →CLW EXIT → [ QM3 ] TARGET.Q → DEF QL(TARGET.Q) CLUSTER(CLUS_A3)
           ↘ [QM4 ] TARGET.Q ↗
 ALTER QMGR CLWLEXIT(myexit)
==========================================================================

1. Queue attributes :
 - CLWLPRTY  (Cluster WorkLoad priority)
 - CLWLUSEQ  (use Local Queue)

2. Queue Manager attributes :
 - CLWLUSEQ  (use Local Queue)

3. CHANNEL attributes :
 - CLWLPRTY  (priority)
 - CLWLWGHT  (weight to a CHANNEL, 1 ~ 99)
 - NETPRTY   (network priority)


=== Continuous Operations ===
1. MQOPEN(TARGET.Q) MQOO_BIND_NOT_FIXED, MQPUT , MQPUT ... MQPUT
Only MQOO_BIND_NOT_FIXED allows load balancing; with bind-on-open, all puts go to the one QM chosen at MQOPEN time.

2. The Queue attribute DEFBIND determines whether or not rerouting will be performed
 while a Queue is open.
 - DEFBIND(NOTFIXED) gives a round-robin distribution of messages to all TARGET.Qs
  in the Cluster.
 - DEFBIND(OPEN) the destination Q is selected at MQOPEN time and will not be
  changed until MQCLOSE.
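
A small hedged C fragment showing the open option in question (the Queue name is illustrative; the returned handle would then be used for the MQPUTs shown above):

  /* bindnotfixed.c - open a clustered Queue so each MQPUT may be load balanced */
  #include <string.h>
  #include <cmqc.h>

  MQHOBJ open_not_fixed(MQHCONN hcon, MQLONG *cc, MQLONG *reason)
  {
      MQOD   od = {MQOD_DEFAULT};
      MQHOBJ hobj;

      strncpy(od.ObjectName, "TARGET.Q", MQ_Q_NAME_LENGTH);
      MQOPEN(hcon, &od, MQOO_OUTPUT | MQOO_BIND_NOT_FIXED,   /* not bound on open */
             &hobj, cc, reason);
      return hobj;
  }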


=== Cluster Related Queue Manager Attributes ===
1. REPOS(ClusterName)
2. REPOSNL(NamelistName)
3. CLWLDATA(32 char max string)
4. CLWLEXIT(Cluster WorkLoad exit name)
5. CLWLLEN(max # of bytes of message data passed to the Cluster WorkLoad exit)


=== Controlling Clusters - Cluster Commands ===
1. SUSPEND QMGR - Removes a QM from a Cluster temporarily.
2. RESUME QMGR - Reinstates a SUSPENDed QM.
3. REFRESH CLUSTER(clustername) - Forces the Cluster information to be re-synchronized.
4. RESET CLUSTER(clustername)  - Removes a QM from the Cluster; used to clean up when things get out of sync.
5. RESET CLUSTER(clustername) QMNAME(QMname) ACTION(FORCEREMOVE) QUEUES(NO)
6. RESET CLUSTER(clustername) QMID(QMid) ACTION(FORCEREMOVE) QUEUES(NO)



=== Controlling Clusters - DISPLAY CLUSQMGR ===
1. DISPLAY CLUSQMGR(*) CLUSTER(name) CHANNEL(name)
returns :
 CLUSDATE / CLUSTIME
  - the date and time when the Definition became available to the Local QMGR.
 DEFTYPE - how the Cluster QMGR was defined.
 QMTYPE - function of QMGR in Cluster, provides FULL or PARTIAL repository service.
 QMID - internally generated unique QMGR name.
 STATUS - the current status of CHANNEL for QMGR.
 SUSPEND - yes or no as a result of a SUSPEND QMGR cmd.

2. The DISPLAY CLUSQMGR command returns cluster information about Queue Managers
 in a Cluster, which is stored in the Local SYSTEM.CLUSTER.REPOSITORY.QUEUE
Definition Type may be :
 CLUSSDR - as a Cluster-Sender CHANNEL from an explicit definition.
 CLUSSDRA - as a Cluster-Sender by auto-definition.
 CLUSSDRB - as a Cluster-Sender CHANNEL, both from an explicit definition
        and by auto-definition.
 CLUSRCVR - as a Cluster-Receiver CHANNEL.


=== Cluster Related Queue Considerations ===
1. Special DISPLAY option :
 DISPLAY QUEUE CLUSINFO
2. Cluster Alias Queue :
 DEF QA(PUBLIC) TARGET(LOCAL.Q) CLUSTER(ITALY)
3. Cluster Queue Manager Aliases :
 DEF QR(ROME) RNAME() RQMNAME(PISA) XMITQ(XQ) CLUSTER(ITALY)

4. Most types of Queues may be defined as Cluster Queues, and as a consequence,
 be advertised to all Queue Managers in the Cluster, just as for Local Queues.
 - Alias Queues may be made available to the Cluster simply by adding
         the CLUSTER keyword to the definition.
 - Queue Manager Aliases advertised to the Cluster may be of the same value as
         for traditional distributed Queueing.
 - Remote Queues are not intended to be advertised to a Cluster, because one of the
          benefits of Clusters is that Remote Queue definitions are no longer
          required. Remote Queues, however, can have a Cluster attribute.
          They can be used to attach a Queue Manager that does not
          support clustering.
 - Model Queues (and hence Temporary Dynamic Queues) cannot have a Cluster
         attribute.

5. Effect of ALTERing Queue definitions :
 - ALTER QUEUE(XXX) PUT(INHIBITED)
  will stop messages being put to that instance of a Queue and also mark it as being
  put inhibited throughout the Cluster. If applicable, this will cause messages to be
  sent to other instances of the queue.
 - ALTER QUEUE(XXX) CLUSTER(' ')
  will take a Queue out of its Clusters and stop other Queue Managers from sending
  messages to it but still allow messages to be put to it from the Local Queue Manager.


Saturday, December 15, 2007
 

.
. IBM WebSphere MQ v6.0
. Ex 4 - IBM WebSphere MQ Distributed Queueing.
. Let's practice Distributed Queueing.
. Connecting two Queue Managers.
.
=== Exercise 4 : Distributed Queuing ===
What we will do :
A. Create the objects required for Distributed Queuing.
B. Configure and refresh the inet daemon.
C. Start message CHANNELs manually.
D. Create the required application objects.
E. Test the message flow using sample applications.
F'. Use Triggering in a distributed environment.
G'. Use the CHANNEL Initiator to start CHANNELs.

======================================================
[A. Create and configure the required connection objects.]
1. For this exercise, again use Queue Managers with circular logging.
 # crtmqm QMC01  (Local QMgr, the sending end: SDR)
 # crtmqm QMC02  (Remote QMgr, the receiving end: RCVR)
 
 # strmqm QMC01
 # strmqm QMC02

2. Be sure that NO CHANNEL Initiator (runmqchi) is running.

3. Define the WebSphereMQ objects required for a connection between your
 Local Queue Manager and the Queue Manager of your partner team.

 The necessary objects are following :
 a. A definition of a message CHANNEL (type=Sender)
  - Channel name = QMC01.TO.QMC02
  - Protocol = TCP/IP
  - Network address = 'Host2(9002)'
  - Transmission Queue name = name of the Remote QMGR
   # runmqsc QMC01
    DEF CHL(QMC01.TO.QMC02) CHLTYPE(SDR) REPLACE +
     TRPTYPE(TCP) CONNAME('Host2(9002)') XMITQ(XQMC02)

   Check => DISPLAY CHSTATUS(QMC01.TO.QMC02)
 b. A definition of a message CHANNEL (type=Receiver)
  The attribute should match to sender CHANNEL of the partner team.
   # runmqsc QMC02
   DEF CHL(QMC01.TO.QMC02) CHLTYPE(RCVR) REPLACE TRPTYPE(TCP)
    ps : the original handout says QMC02.TO.QMC01 here, which is wrong; the receiver CHANNEL name must match the sender's. 囧rz
 c. A Transmission Queue with the same name as the Remote Queue Manager (QMC02).
   DEF QL(XQMC02) REPLACE USAGE(XMITQ)
 d. A Dead Letter Queue named DLQ :
   DEF QL(DLQ) REPLACE
 e. Change the Queue Manager to use the Dead Letter Queue :
   ALTER QMGR DEADQ(DLQ)

4. Use runmqsc to create the WebSphereMQ objects. (cont.)

[B. Configure and activate the required TCP Listener function.]
5. Start the Listener Listening on 9002.
 # runmqlsr -m QMC02 -t TCP -p 9002

[C. Test and Start the connection.]
6. Ping the message CHANNEL from the sender end to test the CHANNEL definitions.
  # runmqsc QMC01
  PING CHL(QMC01.TO.QMC02)
   ( On success => AMQ8020: Ping WebSphere MQ channel complete. )
   ( On failure => AMQ9520: The channel is not defined at the remote end. )

7. Start the message CHANNEL using runmqchl and verify it is working.
 # runmqchl -c QMC01.TO.QMC02 -m QMC01

[D. Create the required application objects.]
8. Define the application Queues on your Queue Manager :
 a. ( on QMC02 ) Redefine the Local Queue QL.A :
   DEF QL(QL.A) REPLACE
 b. ( on QMC01 ) Create Local Definition of a Remote Queue for the QL.A on QMC02 :
   DEF QR(QRMT02) REPLACE +
    RNAME(QL.A) RQMNAME(QMC02) XMITQ(XQMC02)

9. Use runmqsc to create the WebSphereMQ objects. (cont.)

[E. Test distributed Queueing.]
10. Use amqsput to send messages to the Queue QL.A on the partner Queue Manager :
 # amqsput QRMT02 QMC01

11. Use amqsget or amqsbcg to check for successful arrival of messages from your partner team.
 # amqsget QL.A QMC02 or
 # amqsbcg QL.A QMC02 or
 check the CURDEPTH attribute of the target Queue.
  → ( on QMC02 ) DIS QL(QL.A) CURDEPTH

12. If the messages do not arrive, investigate the possible causes and solve the problem :
 - Is the Local Transmission Queue empty ?
 - Is the message CHANNEL running ?
 - Is the Dead Letter Queue of the target Queue Manager empty ?
 - Inspect the error logs in both Queue Managers!


======================================================
[F'. Use Triggering in a distributed environment.]
Set up Remote Triggering :
Use the sample program amqsreq to send request messages to the Queue QL.A on the partner Queue Manager QMC02. The Target Queue should be enabled for triggering so that the sample program amqsech is started automatically in order to generate reply messages which are subsequently received by amqsreq.

1. Reactivate the Trigger function for QL.A to handle request messages now sent to
 your QL.A by the partner team.
 DEFINE QLOCAL(QL.INITQ_AP) REPLACE

 DEFINE PROCESS(PR.ECHO) REPLACE +
  APPLICID('/mqmtop/samp/bin/amqsech')
 // UNIX Systems

 DEFINE PROCESS(PR.ECHO) REPLACE +
  APPLICID('amqsech')
         // Windows Systems

 DEFINE QMODEL(QM.A_REPLY) REPLACE (define this one on QMC01)

 ALTER QL(QL.A) TRIGGER TRIGTYPE(FIRST) +
  INITQ(QL.INITQ_AP) PROCESS(PR.ECHO)

2. Restart the Trigger Monitor :
 # runmqtrm -q QL.INITQ_AP -m QMC02

3. Put request messages on QL.A in your partner team Queue Manager.
 # amqsreq QRMT02 QMC01 QM.A_REPLY

4. Check for a reply of each request message.


[G'. Use the CHANNEL Initiator to start CHANNELs.]
5. Define and modify WebSphereMQ objects to setup automatic CHANNEL operation using a CHANNEL Initiator.

6. Define a CHANNEL Initiator Queue named QL.INITQ_CH :
 DEF QL(QL.INITQ_CH)

7. Enable the Transmission Queue for Triggering :
 ALTER QL(XQMC02) TRIGGER TRIGTYPE(FIRST) +
  TRIGDATA(QMC01.TO.QMC02) INITQ(QL.INITQ_CH)


8. Set the disconnect interval of the CHANNEL to 30 seconds (default is 6000) :
 ALTER CHL(QMC01.TO.QMC02) CHLTYPE(SDR) DISCINT(30)

9. Start the CHANNEL Initiator :
 # runmqchi -m QMC01 -q QL.INITQ_CH

10. Stop and restart the Sender CHANNEL using runmqchi :
 : STOP CHL(QMC01.TO.QMC02)   (in runmqsc on QMC01)
 # runmqchl -m QMC01 -c QMC01.TO.QMC02

11. Wait until the CHANNEL is terminated by the DISCINT(30).
 (The RUNMQCHL window will disappear after DISCINT(30) has elapsed.)
 If the WebSphereMQ Listener is used, you can also watch the Listener window
 on the partner's Queue Manager.

  5724-H72 (C) Copyright IBM Corp. 1994, 2004. ALL RIGHTS RESERVED.
  2008/1/15 15:20:47 Channel 'QMC01.TO.QMC02' is starting
  2008/1/15 15:21:18 Disconnect interval expired.
  2008/1/15 15:21:18 Channel program 'QMC01.TO.QMC02' ended normally.

12. Put some messages on the Queue QRMT02 using amqsput :
 # amqsput QRMT02 QMC01

13. The CHANNEL should restart automatically.

14. If the WebSphereMQ Listener is used, you can also watch the Listener window
 on the partner Queue Manager.

15. If the messages do not arrive, investigate the possible cause and solve the problem.

16. Put some messages on the Queue QRMT02 using amqsreq :
 # amqsreq QRMT02 QMC01 QM.A_REPLY

17. Both CHANNELs should restart automatically.

18. Prove this by checking the CHANNEL status on both sides :
 (Be sure that the DISCINT(30) has not elapsed. )
 DIS CHS(QM.*) all

19. When using the WebSphereMQ Listener, you can also watch the Listener windows
 on both Queue Managers.

20. If the messages do not arrive, investigate the possible cause and solve the problem.
