Admins

This section is dedicated to a variety of material for admins:
monitoring, troubleshooting, optimising and more.


backup

Before running the following script, you need to set up a chore with a TI process that executes this line in its Prolog tab:

SaveDataAll;

/!\ Avoid using the SaveTime setting in tm1s.cfg, as it could conflict with other chores or processes trying to run at the same time.

Here is the DOS backup script that you can schedule to back up your TM1 server:

rem stop the TM1 service on the remote server
netsvc /stop \\TM1server "TM1 service"

rem give the server time to save its data and shut down cleanly
sleep 300

rem refresh the backup folder, then copy the data folder and the } control objects
rmdir "\\computer2\path\to\backup" /S /Q
mkdir "\\computer2\path\to\backup"
xcopy "\\TM1server\path\to\server" "\\computer2\path\to\backup" /Y /R /S
xcopy "\\TM1server\path\to\server\}*" "\\computer2\path\to\backup" /Y /R /S

rem restart the TM1 service
netsvc /start \\TM1server "TM1 service"


Documenting TM1

section dedicated to documenting TM1 with different techniques and tools.


a closer look at chores

If you ever load a .cho file in an editor, this is what you can expect:

534,8
530,yyyymmddhhmmss ------ date/time of the first run
531,dddhhmmss ------ frequency
532,p ------ number p of processes to run
13,16
6,"process name"
560,0
13,16
533,x ------ x=1 active/ x=0 inactive

In the 9.1 series you can see which chores are active from the Chores menu in Server Explorer.
This is not the case in the 9.0 series; there is also no way to see when and how often chores run unless you deactivate them first and edit them. Not convenient, to say the least.
From the specs above, it is easy to set rules for a parser and deliver all that information in a simple report.
So the perl script attached below does just that: it lists all the chores on your server with their date/time of execution, frequency and activity status.
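
For illustration, a minimal parser along those lines might look like the sketch below. This is not the attached chores.pl (which prompts for the path and writes chores.txt); the record numbers follow the layout documented above and the output format is just an assumption:

#!/usr/bin/perl
# chores_report.pl -- minimal sketch of a .cho parser based on the record
# numbers documented above; paths and output layout are assumptions.
use strict;
use warnings;

my $datadir = shift or die "usage: perl chores_report.pl <path to TM1 data folder>\n";

printf "%-3s %-19s %-16s %s\n", 'ACT', 'date-time', 'frequency', 'chore name';

opendir my $dh, $datadir or die "cannot open $datadir: $!\n";
my @cho = grep { /\.cho$/i } readdir $dh;
closedir $dh;

for my $name (sort @cho) {
    (my $chore = $name) =~ s/\.cho$//i;          # chore name = file name

    my ($start, $freq, $active) = ('', '', 0);
    open my $fh, '<', "$datadir/$name" or next;
    while (<$fh>) {
        $start  = $1 if /^530,(\d{14})/;         # yyyymmddhhmmss, date/time of first run
        $freq   = $1 if /^531,(\d{9})/;          # dddhhmmss, frequency
        $active = $1 if /^533,(\d)/;             # 1 = active, 0 = inactive
    }
    close $fh;

    # reformat the raw timestamps for readability
    my $when  = $start =~ /(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})/
              ? "$1/$2/$3 $4:$5:$6" : $start;
    my $every = $freq =~ /(\d{3})(\d{2})(\d{2})(\d{2})/
              ? "$1d $2h $3m $4s" : $freq;

    printf "%-3s %-19s %-16s %s\n", ($active ? ' X' : ''), $when, $every, $chore;
}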

Procedure to follow:
1. install perl
2. save chores.pl in a folder
3. double-click on chores.pl
4. a window opens; enter the path to your TM1 server data folder there
5. open the resulting file chores.txt, created in the same folder as chores.pl

Result:

ACT /     date-time     /    frequency   / chore name
 X   2005/08/15 04:55:00 007d 00h 00m 00s currentweek
 X   2007/04/28 05:00:00 001d 00h 00m 00s DailyS
 X   2007/05/30 05:50:00 001d 00h 00m 00s DAILY_UPDATE
 X   2007/05/30 05:40:00 001d 00h 00m 00s DAILY_S_UPDATE
 X   2005/08/13 20:00:05 007d 00h 00m 00s eweek
 X   2006/04/06 07:30:00 001d 00h 00m 00s a_Daily
 X   2007/05/30 06:05:00 001d 00h 00m 00s SaveDataAll
 X   2007/05/28 05:20:00 007d 00h 00m 00s WEEKLY BUILD
 X   2005/05/15 21:00:00 007d 00h 00m 00s weeklystock
     2007/05/28 05:30:00 007d 00h 00m 00s WEEKLY_LOAD


a closer look at subsets

If you ever load a .sub file (subset) in an editor, this is the format you can expect:

283,2 start
11,yyyymmddhhmmss creation date
274,"string" name of the alias to display
18,0 ?
275,d d = number of characters of the MDX expression stored on the next line
278,0 ?
281,b b = 0 or 1 "expand above" trigger
270,d d = number of elements in the subset followed by the list of these elements, this also represents the set of elements of {TM1SubsetBasis()} if you have an MDX expression attached

These .sub files are stored in cube}subs folders for public subsets or user/cube}subs for private subsets.

Often a source of discrepancy in views and reports is the use of static subsets. For example, a view created a while ago displays a set of customers; new customers have since been added in the system, but they will not appear in that view unless they are manually added to the static subset.

Based on the details above, one could search for all non-MDX/static subsets (wingrep regexp search 275,$ in all .sub files) and identify which might actually need to be made dynamic in order to keep up with slowly changing dimensions.
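
To automate that search, the sketch below walks all the }subs folders and flags every .sub file that has no non-empty 275 record. The detection rule simply mirrors the wingrep heuristic above, so treat it as an assumption and verify it against a few of your own .sub files first:

#!/usr/bin/perl
# staticsubs.pl -- sketch: report subsets that carry no MDX expression (static
# subsets), based on the 275 record described above.
use strict;
use warnings;
use File::Find;

my $datadir = shift or die "usage: perl staticsubs.pl <path to TM1 data folder>\n";

find(sub {
    return unless -f $_ && /\.sub$/i;                # only subset files
    return unless $File::Find::dir =~ /\}subs$/i;    # in cube}subs or user\cube}subs folders
    open my $fh, '<', $_ or return;
    my $has_mdx = 0;
    while (<$fh>) {
        $has_mdx = 1 if /^275,[1-9]/;                # non-zero MDX length => dynamic subset
    }
    close $fh;
    print "static: $File::Find::name\n" unless $has_mdx;
}, $datadir);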


Beam me up Scotty: 3D Animated TM1 Data Flow

Explore the structure of your TM1 system through the Skyrails 3D interface:

If you do not have flash, you can have a look at some screenshots
/!\ WARNING: your eyeballs may pop out!

This is basically the same as the previous work with graphviz, except this time it is pushed to 3D, animated and interactive.

The visualisation engine Skyrails is developed by Ph.D. student Yose Widjaja.
I only wrote the TM1 parser and the associated Skyrails script that port a high-level view of the TM1 data flow into the Skyrails realm.

How to proceed:

.download and unzip skyrails beta 2nd build
.download and unzip TM1skyrails.zip (attachment below) in the skyraildist2 folder
.in the skyraildist2 folder, double-click TM1skyrails.pl (you will need perl installed unless someone wants to provide a compiled .exe of the script with the PAR module)
.enter the path to (a copy of) your TM1 Data folder
.skyrails window opens, click on the "folder" icon and click TM1

If you don't want to install perl, you can still enjoy a preview of the Planning Sample that comes out of the box. Just double-click on raex.exe.

w,s,a,d keys to move the camera

Quick legend:
orange -- cube
blue -- process
light cyan -- file
red -- ODBC source
green sphere -- probably a reference to an object that does not exist (anymore)
green edge -- intercube rule flow
red edge -- process (CellGet/CellPut) flow

Changelog:
1.1 a few mouse gestures added (right click on a node then follow instructions) to get planar (like graphviz) and spherical representations.
1.2 - edges color coded, see legend above
- animated arrows
- gestures to display different flows (no flow/rules only/processes only/all flow)


Dimensions updates mapping

When faced with a large "undocumented" TM1 server, it can be hard to see how dimensions are being updated.

The following perl/graphviz script creates a graph to display which processes are updating dimensions.

The script dimflow.pl below looks for dimension-updating functions (DimensionElementInsert, DimensionCreate...) in the .pro files of the TM1 data folder and maps it all together.
Unfortunately it cannot take manual editing of dimensions into account.

This is the result:

[screenshot: dimensions updates graph]
Legend:
processes = red rectangles
dimensions = blue bubbles

The above screenshot is probably a good example of why such a map can be useful: you can see immediately that several processes are updating the same dimensions.
It might be necessary to have several processes feeding a dimension, but it is worth reviewing these processes to make sure they are not redundant or undoing each other's work.

Procedure to follow:
1. install perl and graphviz
2. download the script below and rename it to .pl extension
3. double-click on it
4. enter the path to your TM1 Data folder (\\servername\datafolder)
5. This will create 2 files "dim.dot" and "dim.gif" in the same folder as the perl script
6. Open dim.gif with any browser / picture editor
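
For the curious, the core of such a scan can be sketched as follows. This is only a rough illustration of the idea, not the attached dimflow.pl itself, and it only catches dimension names passed as quoted literals:

#!/usr/bin/perl
# dimflow_sketch.pl -- scan .pro files for dimension-updating functions and
# write a graphviz .dot file mapping processes to the dimensions they update.
use strict;
use warnings;

my $datadir = shift or die "usage: perl dimflow_sketch.pl <path to TM1 data folder>\n";

opendir my $dh, $datadir or die "cannot open $datadir: $!\n";
my @pro = grep { /\.pro$/i } readdir $dh;
closedir $dh;

my %edge;    # process name => { dimension name => 1 }
for my $name (@pro) {
    (my $process = $name) =~ s/\.pro$//i;
    open my $fh, '<', "$datadir/$name" or next;
    while (<$fh>) {
        # the first argument of these functions is the dimension being updated
        while (/Dimension(?:ElementInsert\w*|ElementComponentAdd|Create)\s*\(\s*'([^']+)'/gi) {
            $edge{$process}{$1} = 1;
        }
    }
    close $fh;
}

open my $dot, '>', 'dim.dot' or die "cannot write dim.dot: $!\n";
print $dot "digraph dimflow {\n  rankdir=LR;\n";
for my $p (sort keys %edge) {
    print $dot qq{  "$p" [shape=box, color=red];\n};              # processes = red rectangles
    for my $d (sort keys %{ $edge{$p} }) {
        print $dot qq{  "$d" [shape=ellipse, color=blue];\n};     # dimensions = blue bubbles
        print $dot qq{  "$p" -> "$d";\n};
    }
}
print $dot "}\n";
close $dot;
# then render it with graphviz:  dot -Tgif dim.dot -o dim.gif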


graphing TM1 data flow

Attached is the new version of a little parser in perl (free) that will create a text file for graphviz (free too) out of your .pro and .rux files and then generate a graph of the data flow in your TM1 server...

[screenshot: data flow graph]
(the image has been cropped and scaled down for display, the original image is actually readable)

Legend:

ellipses = cubes, rectangles = processes
red = cellget, blue = cellput, green = inter-cube rule

Procedure to follow:

1. install perl and graphviz
2. put the genflow perl script in any folder, make sure it has the .pl extension (not txt)
3. double-click on it
4. Enter the path to your TM1 data folder, for example \\servername\datafolder
5. Hit return and wait until the window disappears
This creates 2 files: "flow.dot" and "flow.gif" in the same folder as the perl script
6. Open "flow.gif" in any browser or picture editor

Changelog
1.3:
.CellPut parsing fix
.cubes/processes names displayed 'as is'

1.4:
.display import view names along the edges
.display zeroout views
.sources differentiated by shape

This is still quite experimental, but it can be useful to view at a glance the high-level interactions between your cubes.
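
If you are curious about how the parsing works, here is a rough sketch of the approach (not the attached genflow script itself); it only catches cube names passed as quoted literals and ignores views and variables:

#!/usr/bin/perl
# genflow_sketch.pl -- rough sketch only: scan .pro files for CellGet*/CellPut*
# calls and .rux files for DB() references, then write a graphviz .dot file of
# the data flow.
use strict;
use warnings;

my $datadir = shift or die "usage: perl genflow_sketch.pl <path to TM1 data folder>\n";

opendir my $dh, $datadir or die "cannot open $datadir: $!\n";
my @files = grep { /\.(pro|rux)$/i } readdir $dh;
closedir $dh;

my (%proc, %cube, %edge);
for my $name (@files) {
    (my $base = $name) =~ s/\.\w+$//;
    open my $fh, '<', "$datadir/$name" or next;
    if ($name =~ /\.pro$/i) {
        $proc{$base} = 1;
        while (<$fh>) {
            while (/CellGet[NS]\s*\(\s*'([^']+)'/gi) {             # cube read by the process
                $cube{$1} = 1;
                $edge{qq{"$1" -> "$base" [color=red]}} = 1;
            }
            while (/CellPut[NS]\s*\(\s*[^,]*,\s*'([^']+)'/gi) {    # cube written by the process
                $cube{$1} = 1;
                $edge{qq{"$base" -> "$1" [color=blue]}} = 1;
            }
        }
    }
    else {                                                          # .rux: rules of cube $base
        $cube{$base} = 1;
        while (<$fh>) {
            while (/DB\s*\(\s*'([^']+)'/gi) {                       # inter-cube rule reference
                next if lc $1 eq lc $base;                          # skip self-references
                $cube{$1} = 1;
                $edge{qq{"$1" -> "$base" [color=green]}} = 1;
            }
        }
    }
    close $fh;
}

open my $dot, '>', 'flow.dot' or die "cannot write flow.dot: $!\n";
print $dot "digraph flow {\n  rankdir=LR;\n";
print $dot qq{  "$_" [shape=box];\n}     for sort keys %proc;   # processes = rectangles
print $dot qq{  "$_" [shape=ellipse];\n} for sort keys %cube;   # cubes = ellipses
print $dot "  $_;\n"                     for sort keys %edge;
print $dot "}\n";
close $dot;
# then render it with graphviz:  dot -Tgif flow.dot -o flow.gif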


indexing subsets

Maintaining subsets on your server can be problematic. For example, you want to delete an old subset that you found out to be incorrect, and your server replies with this:
[screenshot: "delete subset failed" error]
This is not quite helpful, as it does not say which views are affected and need to be corrected.

Worse, as Admin you can delete any public subset as long as it is not used in a public view. If it is used in a user's private view, it will be deleted anyway and that private view might become invalid or simply fail to load.

In order to remedy these issues, I wrote a little perl script, attached below, that will:
.index all your subsets, including users' subsets.
.display all unused subsets (i.e. not attached to any existing views)
From the index, you can find out right away in which views a given subset is used.

I suppose the same could be achieved through the TM1 API, though you would have to log in as every user in turn in order to get a full index of all subsets.

Run from a DOS shell: perl indexsubset.pl \\path\to\TM1\server > mysubsets.txt
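
A stripped-down sketch of the indexing idea is shown below. It assumes a .vue file mentions the subsets it uses by name as plain text, which is a heuristic to check against a couple of your own view files before trusting the report:

#!/usr/bin/perl
# indexsubs_sketch.pl -- simplified sketch of the indexing idea: for each cube,
# grep the text of its .vue files for every subset name.
use strict;
use warnings;
use File::Find;

my $datadir = shift or die "usage: perl indexsubs_sketch.pl <path to TM1 server folder>\n";

my (%subsets, %viewtext);    # cube => [subset names] / cube => { view name => file text }
find(sub {
    return unless -f $_ && /\.(sub|vue)$/i;
    my $ext = lc $1;
    my ($cube) = $File::Find::dir =~ /([^\\\/]+)\}(?:subs|vues)$/i or return;
    (my $name = $_) =~ s/\.\w+$//;
    if ($ext eq 'sub') {
        push @{ $subsets{lc $cube} }, $name;
    } else {
        open my $fh, '<', $_ or return;
        my $text = do { local $/; <$fh> };       # slurp the whole view file
        close $fh;
        $viewtext{lc $cube}{$name} = lc($text // '');
    }
}, $datadir);

for my $cube (sort keys %subsets) {
    for my $subset (sort @{ $subsets{$cube} }) {
        my @views = grep { index($viewtext{$cube}{$_}, lc $subset) >= 0 }
                    sort keys %{ $viewtext{$cube} || {} };
        print "$cube | $subset | ", (@views ? join(', ', @views) : 'UNUSED'), "\n";
    }
}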


processes history

On a large, undocumented and mature TM1 server you might find yourself with a lot of processes, wondering how many of them are still in use or when they were last run.

The following script answers these questions for you.

You could look at the creation/modification times of the processes in the TM1 Data folder, but to get the run history of a given process you would have to sit through pages of tm1smsg.log, which is what the script below does for you.
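
At its core the idea is just to scan the log for process-execution entries and keep, for each process, the latest timestamp and a run counter. A heavily simplified sketch follows; the exact wording and layout of the log lines differ between TM1 versions, so the pattern below is only a placeholder to adapt after looking at a few lines of your own log:

#!/usr/bin/perl
# lastrun_sketch.pl -- sketch of the approach only: pull process-execution
# entries out of a TM1 message log and keep the most recent timestamp and a
# run counter per process.  The regex is a PLACEHOLDER: inspect your own
# tm1smsg.log / tm1server.log and adapt it.
use strict;
use warnings;

my $log = shift or die "usage: perl lastrun_sketch.pl <message log file>\n";

my (%lastrun, %count);
open my $fh, '<', $log or die "cannot open $log: $!\n";
while (<$fh>) {
    # assumed shape: a timestamp somewhere on the line and a quoted process
    # name following the word "Process" -- adjust to your log format
    next unless /(\d{4}[-\/]\d{2}[-\/]\d{2}[ T]\d{2}:\d{2}:\d{2}).*?Process\s+"([^"]+)"/i;
    my ($when, $process) = ($1, $2);
    $count{$process}++;
    $lastrun{$process} = $when if !$lastrun{$process} or $when gt $lastrun{$process};
}
close $fh;

print "processes by name:\n";
for my $p (sort { lc $a cmp lc $b } keys %lastrun) {
    print "$p last run $lastrun{$p} [x$count{$p}]\n";
}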

Procedure to follow for TM1 9.0 or 8.x
1. install perl (free)
2. save loganalysis.pl.txt in a folder as loganalysis.pl
3. stop your TM1 service (necessary to beat the windows lock on tm1smsg.log)
4. copy the tm1smsg.log into the folder where loganalysis.pl is
5. start your TM1 service
6. double-click loganalysis.pl

Procedure to follow for TM1 9.1
1. install perl (free)
2. save loganalysis.pl.txt in a folder as loganalysis.pl
3. copy the tm1server.log into the folder where loganalysis.pl is
4. double-click loganalysis.pl

That should open the newly created processes.txt in Notepad, and it should look like the following:

First, all processes sorted by name, with the last run time, the user and how many times each process ran.

processes by name:
2005load run 2006/02/09 15:02:33 user Admin [x2]
ADMIN - Unused Dimensions run 2006/04/26 14:02:58 user Admin [x1]
Branch Rates Update run 2006/10/19 15:23:29 user Admin [x1]
BrandAnalysisUpdate run 2005/04/11 08:09:13 user Admin [x33]
....

Second, all processes sorted by last run time, with the user and how many times each process ran.

processes by last run: 
2005/04/11 08:09:13 user Admin ran BrandAnalysisUpdate [x33]
2005/04/11 10:26:29 user Admin ran LoadDelivery [x1]
2005/04/19 08:44:22 user Admin ran UpdateAntStockage [x19]
2005/04/26 14:18:17 user Admin ran weeklyodbc [x1]
2005/05/12 08:34:16 user Admin ran stock [x1]
2005/05/12 08:37:59 user Admin ran receipts [x1]
....

I do not know what these "BrandAnalysisUpdate" or "LoadDelivery" processes do but I guess nobody is going to miss them.


The case against single children

I came across hierarchies holding single children.
While creating a consolidation over only one element might make sense in some hierarchies, some people just use consolidations as an alternative to aliases.
Either they do not know aliases exist or they come from an age when TM1 did not have them yet.

The following process will help you identify all the "single child" elements in your system.
This effectively loops through all elements of all dimensions of your system, so this could be reused to carry out other checks.

#where to report the results
Report = '\\tm1server\reports\single_children.csv';

#counter of childless consolidations (initialised here so the increment below works)
single = 0;

#get number of dimensions on that system
TotalDim = Dimsiz('}Dimensions');

#loop through all dimensions
i = 1;
While (i <= TotalDim);
  ThisDim = DIMNM('}Dimensions',i);

  #foreach dimension
  #loop through all their elements 
  j = 1;
  While (j <= Dimsiz(ThisDim));
    Element = DIMNM(ThisDim,j);
    #report the parent if it has only 1 child  
    If( ELCOMPN(ThisDim, Element) = 1 );
      AsciiOutput(Report,ThisDim,Element,ELCOMP(ThisDim,Element,1));
    Endif;
    #report if consolidation has no child!!!
    If( ELCOMPN(ThisDim, Element) = 0 & ELLEV(ThisDim, Element) > 0 );
      single = single + 1;
      AsciiOutput(Report,ThisDim,DIMNM(ThisDim,j),'NO CHILD!!');
    Endif;
  j = j + 1;
  End;
  i = i + 1; 
End;


TM1 Documenter (a Documenting tool)

Hi,

Just FYI, a new version of TM1 Documenter, version 2.5 (a documenting tool), has been released.
I believe it will be very useful to TM1 consultants and developers, and to organisations as well.
Being a TM1 consultant myself, I know the pain points of a consultant; having a Java background, I developed this software.

Usually the documentation task takes about 20-40 days, blocking a valuable resource (a TM1 developer) for a less important task. By using this software, the documentation task is completed in just a few clicks. Moreover, during development or support work, when the model becomes huge or complex, people sometimes lose track of the exact data flow of the model (as rule sets are difficult to understand). Sometimes a developer deletes an object by mistake (be it a cube, dimension, element or subset) that is providing data to some other object. TM1 does not disallow this, but the model is thrown into disarray. Here the Object Dependency Checker comes to the rescue.

Software summary: it has two main modules in all. I have introduced one new module, the Object Dependency Checker.

Following are the details:

Documenter Module:
1. Detailed information / summary of the cubes.
2. Cube Sizing.
3. Views info (optional).
4. Rules info (optional).
5. Detailed information / summary of the Dimensions.
6. Subsets info (optional).
7. Export TI Process list.
8. Export Dimension Attributes
9. Export Element Attributes
10. Output HTML file with an index on the left side and object details in the center pane, making it easy to navigate the objects.

Object Dependency Checker Module:
1. Cube Dependency: select a cube and see all the other cubes depending on it (i.e. view all cubes that are sourced by this cube).
2. Dimension Dependency: select a dimension and see all cubes and rules using it. It also displays whether this dimension is being used as a picklist anywhere.
3. Element Dependency: select an element from a dimension and see all cubes using, specifically, this element's data.
4. Subset Dependency: select a subset and see all the views using it. It also displays whether this subset is being used as a picklist anywhere.
5. Cube-Element Dependency: this is detailed element-level dependency. This module checks whether the data of the selected element of the selected cube is being used anywhere.
6. Export option available for all above sub-modules.

So here it is.
You can download it from the links below (please install this software with admin rights):
http://www.mediafire.com/?98q9a8tm9nu0w01
or
https://rapidshare.com/files/3609668715/TM1_Documenter_Version2.5.rar

Your suggestions and queries are welcome. Please give me feedback on this tool.

Krishna
krishna.dixit.sp@gmail.com


dynamic tm1p.ini and homepages in Excel

Pointing all your users to a single TM1 Admin host is convenient but not flexible if you manage several TM1 services.
Each TM1 service might need different settings and you do not necessarily want users to be able to see the development or test services for example.

Attached below is an add-in that logs users on to a predefined server with predefined settings, as shown in this diagram:
[diagram: login schema]

With such a setup, you can switch your users from one server to the other without having to tinker with the tm1p.ini files on every single desktop.
This solution probably offers the most flexibility and maintainability as you could add conditional statements to point different groups of users to different servers/settings and even manage and retrieve these settings from a cube through the API.

This add-in also includes:
- previous code like the "TM1 freeze" button
- automatic loading of an Excel spreadsheet named after the user, so each user can customise it with their reports/links for faster access to their data.



The TM1 macro OPTSET, used in the .xla below, can preconfigure the tm1p.ini with a lot more values.

The official TM1 Help does not reference all the available values, though.
Here is a more complete list; you can actually change all the parameters displayed in Server Explorer under File->Options with OPTSET:
AdminHost
DataBaseDirectory
IntegratedLogin
ConnectLocalAtStartup
InProcessLocalServer
TM1PostScriptPrinter
HttpProxyServerHost
HttpProxyServerPort
UseHttpProxyServer
HttpConnectorUrl
UseHttpConnector

and more:

AnsiFiles
GenDBRW
NoChangeMessage
DimensionDownloadMaxSize

this also applies to OPTGET

WARNING:
Make sure that all hosts on the AdminHost line are up and working, otherwise Architect/Perspectives will hang for a couple of seconds while trying to connect to these hosts.


free utilities

If you are stuck with a Windows operating system, you might need some tools for basic needs; these are all free:

for brute/regexp search in .pro and .rux files: WinGrep

finding out changes in different versions of files: WinMerge
or the PSPad editor

Windirstat is quite a useful visualisation tool to clean up your drives/servers. A picture is worth a thousand words, take a look at the screenshot.
It was first developed for KDE: kdirstat

SnagIt can capture scrolling long web pages, extract text from windows, annotate images and more. Read the detailed SnagIt review.
download here
free license subscription here (thanks Eric!)

A few more tools from lifehacker

And finally, not a desktop tool per se, but quite useful to share files quickly, easily and securely: http://drop.io


How to monitor TM1 connections using a Java application

Hello,

download --> TM1Shell.rar.
This program is a shell that allows you to connect to a running TM1 server.

It then starts two threads:
- one writes the server's log activity to a logfile and saves it;
- the second gives you a description of dimensions, cubes, elements... and lets you export it to an Excel file.

Use cmd to display the list of available commands.
The project is under construction and any suggestions are welcome; a new version is coming out soon.
If you have any questions, you can contact me at lucas.joignaux@gmail.com


Locking and updating locked cubes

Locking cubes is a good way to ensure your (meta)data is not tampered with.
Right-click on the cube you wish to lock, then select Security->Lock.
This protects the cube contents from TI processes and from (un)intentional changes by admins.
However, it makes updating your (meta)data more time consuming, as you need to remove the lock prior to updating the cube.

Fortunately, the CubeLockOverride function allows you to automate that step. The following TI code demonstrates this:
.lock a cube
.copy/paste the code in a TI Prolog tab
.change the parameters to fit your system
.execute:

# uncomment / comment the next line to see the process win / fail
CubeLockOverride(1);

Dim = 'Day';
Element = 'Day 01';
Attribute = 'Dates 2010';
NewValue = 'Saint Glinglin';

if( CellIsUpdateable('}ElementAttributes_' | Dim, Element, Attribute) = 1);
    AttrPutS(NewValue, Dim, Element, Attribute);
else;
    ItemReject('could not unlock element ' | Element | ' in ' | Dim);
endif;

Note: CubeLockOverride is among the reserved words listed in the TM1 manual, but its function only seems to be documented in the 8.4.5 release notes.
This works from 8.4.5 up to the most recent 9.x series.


managing the licences limit

One day you might face (or have already faced) the problem of too many licences being in use, with the result that additional users cannot log in.
Also, on a default setup, nothing stops users from opening several TM1Web/Perspectives sessions and reaching the licence limit.
So, in order to prevent that:

.open the cube }ClientProperties, change all users' MaximumPorts to 1
.in your tm1s.cfg add this line, which will time out all idle connections after 1 hour:
IdleConnectionTimeOutSeconds = 3600

To see who's logged on:
.use tm1top
or
.open the cube }ClientProperties
all logged-in users have the STATUS measure set to "ACTIVE"
or
.in the Server Manager (right-click the server icon), click "Select clients..." to get the list

To kick some users out without taking the server down:
in Server Explorer, right-click on your server icon -> Server Manager
select "Disconnect clients" and "Select clients..."
then OK and they are gone.

Unfortunately there is still no workaround for the admin to log in when users take all the slots allowed.


monitor rules and processes

Changing a rule or process in TM1 does not show up in the logs.
That is fine as long as you are the only Power User able to tinker with these objects.
Unfortunately, it can get out of hand pretty quickly as more power users join the party and make changes that might impact other departments' data.
So here is a simple way to report changes.

The idea is to compare the current files on the production server with a backup from the previous day.

You will need:
.access to the live TM1 Data Folder
.access to the last daily backup
.a VB script to email the results; you can find one there
.diff, egrep and unix2dos; you can extract these from that zip package and from http://www.efgh.com/software/unix2dos.exe
or download the attachments below directly (GNU license)

Dump these files in D:\TM1DATA\BIN for example, or some path accessible to the TM1 server.

In the same folder create a diff.bat file, replacing all the TM1DATA paths to match your configuration:

@echo off
cd D:\TM1DATA\BIN
del %~1
rem windows file compare fc is just crap, must fall back to the mighty GNU tools
diff -q "\\liveserver\TM1DATA" "\\backupserver\TM1DATA" | egrep "\.(pro|RUX|xdi|xru|cho)" > %~1
rem make it notepad friendly, i.e. add these horrible useless CR chars at EOL, it's 2008 but native windows apps are just as deficient as ever
unix2dos %~1
rem if diff is not empty then email results
if %~z1 GTR 1 sendattach.vbs mailserver 25 from.email to.email "[TM1] daily changes log" " " "D:\TM1DATA\BIN\%~1"

Now you can set up a TM1 process with the following line to run diff.bat, and schedule it from a chore.

ExecuteCommand('cmd /c D:\TM1DATA\BIN\diff.bat diff.txt',0);

Best is to run the process at close of business, just before creating the backup of the day.

And you should start receiving emails like these:

Files \\liveserver\TM1DATA\Check Dimension CollectionCat.pro and \\backupserver\TM1DATA\Check Dimension CollectionCat.pro differ
Files \\liveserver\TM1DATA\Productivity.RUX and \\backupserver\TM1DATA\Productivity.RUX differ
Only in \\liveserver\TM1DATA: Update Cube Branch Rates.pro

In this case we can see that the rules from the Productivity cube have changed today.


monitoring chores by email

Using the script in the Send Email Attachments article, it is possible to set it up to automatically email the Admin when a process in a chore fails.

Here is how to proceed:
1. set up the admin email process
First we create a process that adds an Email property to the }ClientProperties cube and stores the email address the alerts will be forwarded to.

1.1 create a new process
---- Advanced/Parameters Tab, insert this parameter:
AdminEmail / String / / "Admin Email?"

--- Advanced/Prolog tab
if(DIMIX('}ClientProperties','Email') = 0);
  DimensionElementInsert('}ClientProperties','','Email','S');
Endif;

--- Advanced/Epilog tab
CellPutS(AdminEmail,'}ClientProperties','Admin','Email');

1.2 Save and Run

2. create monitor process

---- Advanced/Prolog tab
MailServer = 'smtp.mycompany.com';
LogDir = '\\tm1server\e$\TM1Data\Log';
ScriptDir = 'E:\TM1Data\';

NumericGlobalVariable( 'ProcessReturnCode');

If(ProcessReturnCode <> ProcessExitNormal());

  If(ProcessReturnCode = ProcessExitByChoreQuit());
    Status = 'Exit by ChoreQuit';
  Endif;
  If(ProcessReturnCode = ProcessExitMinorError());
    Status = 'Exit with Minor Error';
  Endif;
  If(ProcessReturnCode = ProcessExitByQuit());
    Status = 'Exit by Quit';
  Endif;
  If(ProcessReturnCode = ProcessExitWithMessage());
    Status = 'Exit with Message';
  Endif;
  If(ProcessReturnCode = ProcessExitSeriousError());
    Status = 'Exit with Serious Error';
  Endif;
  If(ProcessReturnCode = ProcessExitOnInit());
    Status = 'Exit on Init';
  Endif;
  If(ProcessReturnCode = ProcessExitByBreak());
    Status = 'Exit by Break';
  Endif;

  vBody = 'Process failed: '|Status|'. Check '|LogDir;
  Email = CellGetS('}ClientProperties','Admin','Email');
  If(Email @<> '');
    S_Run = 'cmd /c '|ScriptDir|'\SendMail.vbs '|MailServer|' 25 '|Email|' '|Email|' "TM1 chore alert" "'|vBody|'"';
    ExecuteCommand(S_Run, 0);
  Endif;
Endif;

2.1. adjust the LogDir, MailServer and ScriptDir values to your local settings

3. insert this monitor process in chore
This monitor process needs to be placed after every process that you would like to monitor.

How does it work?
Every process, after execution, returns a global variable "ProcessReturnCode", and that variable can be read by a process running right after in a chore.
The above process checks for that return code and pipes it to the mail script if it happens to be different from the normal exit code.

If you have a lot of processes in your chore, you will probably prefer to use the ExecuteProcess command and check the return code in a loop. That method is explained here.


monitoring chores by email part 2

Following up on monitoring chores by email, we will take a slightly different approach this time.
We use a "metaprocess" to execute all the processes listed in the original chore, check their return status and act on it if needed.
This allows for maximum flexibility as you can get that controlling process to react differently to any exit status of any process.

1. create process ProcessCheck
--- Data Source tab
choose ASCII; the Data Source Name points to an already existing chore file, for example Daily Update.cho
[screenshot: Data Source tab]

--- Variables tab
The Variables tab has to be set up this way:
[screenshot: Variables tab]

--- Advanced/Data tab

#mind that future TM1 versions might use a different format for .cho files and that might break this script
If(Tag @= '6');

  MailServer = 'mail.myserver.com';
  LogDir = '\\server\f$\TM1Data\myTM1\Log';

  #get the process names from the deactivated chore
  Process = Measure;

  NumericGlobalVariable('ProcessReturnCode');
  StringGlobalVariable('Status');

  ErrorCode = ExecuteProcess(Process);

  If(ErrorCode <> ProcessExitNormal());

    If(ProcessReturnCode = ProcessExitByChoreQuit());
      Status = 'Exit by ChoreQuit';
      #Honour the chore flow so stop here and quit too
      ChoreQuit;
    Endif;
    If(ProcessReturnCode = ProcessExitMinorError());
      Status = 'Exit with Minor Error';
    Endif;
    If(ProcessReturnCode = ProcessExitByQuit());
      Status = 'Exit by Quit';
    Endif;
    If(ProcessReturnCode = ProcessExitWithMessage());
      Status = 'Exit with Message';
    Endif;
    If(ProcessReturnCode = ProcessExitSeriousError());
      Status = 'Exit with Serious Error';
    Endif;
    If(ProcessReturnCode = ProcessExitOnInit());
      Status = 'Exit on Init';
    Endif;
    If(ProcessReturnCode = ProcessExitByBreak());
      Status = 'Exit by Break';
    Endif;

    vBody = Process|' failed: '|Status|'. Check details in '|LogDir;
    Email = CellGetS('}ClientProperties','Admin','Email');
    If(Email @<> '');
      S_Run = 'cmd /c F:\TM1Data\CDOMail.vbs '|MailServer|' 25 '|Email|' '|Email|' "TM1 chore alert" "'|vBody|'"';
      ExecuteCommand(S_Run, 0);
    Endif;
  Endif;

Endif;

The code only differs from the first method when the process returns a ChoreQuit exit. Because we are running the Daily Update processes from another chore, that ChoreQuit will not apply to the latter, so we need to call ChoreQuit explicitly to respect the flow and stop at the same point.

2. create chore ProcessCheck
just add the process above and set it to the same frequency and time as the Daily Update chore that you want to monitor

3. deactivate Daily Update
since the ProcessCheck chore will run the processes of the Daily Update chore, there is no need to execute Daily Update a second time


monitoring users logins

A quick way to monitor user logins/logouts on your system is to log the STATUS value (i.e. ACTIVE or blank) from the }ClientProperties cube.

View->Display Control Objects
Cubes -rightclick- Security Assignments
browse down to the }ClientProperties cube and make sure the Logging box is checked
tm1server -rightclick- View Transaction Log
Select Cubes: }ClientProperties

All the transactions are stored in the tm1s.log file; however, if you are on a TM1 version prior to 9.1 hosted on a Windows server, the file will be locked.
A "Save Data" will close the log file and add a timestamp to its name, so you can start playing with it.

/!\ This trick does not work in TM1 9.1SP3 as it does not update the STATUS value.


Oops I did it again!

OH NOOOEES! A luser just ran that hazardous process or spreading on the production server and as a result trashed loads of data on your beloved server.
You cannot afford to take the server down to get yesterday's backup and they need the data now...
Fear not, the transaction log is here to save the day.

.in Server Explorer, right-click on the server -> View Transaction Log
.narrow the query as much as you can to the time/client/cube/measures that you are after
/!\ Mind that the date is in North American format mm/dd/yyyy
.Edit->Select All
.Edit->Back Out will roll back the selected entries

Alternatively, you could restore the last backed-up .cub file of the "damaged" cube:
.in server explorer: right-click->unload cube
.overwrite the .cub with the backed up .cub
.reload the cube from server explorer by opening any view from it


Out of Memory

You will get the dreaded "Out of Memory" message if your TM1 server grows beyond 2 GB of RAM.
On top of adding more RAM, you also need to add the /3GB switch in C:\boot.ini to extend the address space available to the TM1 server from 2 to 3 GB; if you ever need more than that, you will have to look for a 64-bit server.

C:\boot.ini before:

[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows 2000 Advanced Server" /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="restore mode" /safeboot:dsrepair /sos

C:\boot.ini after:

[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows 2000 Advanced Server" /fastdetect /3GB
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="restore mode" /safeboot:dsrepair /sos

This trick will work only for Windows 2000 Advanced or Datacenter Server and Windows 2003 Enterprise or Datacenter Edition.

It is also recommended that you restart your TM1 service daily in order to free up the RAM used by TM1 operations during the day.

http://support.microsoft.com/default.aspx?scid=kb;en-us;291988



After a RAM upgrade to 3 GB, you might still get the "Out of Memory" message when you are importing a lot of data at once, even though your server itself is actually using "only" 2.5 GB.
A way to work around that limit is to break down the import of data:
use SaveDataAll to commit the changes to disk after part of the import,
then use CubeUnload on the cube that you just updated. That frees some memory that can be used to import further data into other cubes; the unloaded cube will be loaded back into memory later, once someone opens a view on it.


pushing data from an iSeries to TM1

TM1 chore scheduling is frequency based, i.e. it will run and try to pull data after a predefined period of time, regardless of the availability of the data at the source. Unfortunately it can be hit or miss, and it can even become a maintenance issue when Daylight Saving Time comes into play.
Ideally you would need to import or get the data pushed to TM1 as soon as it is available. The following article shows one way of achieving that goal with an iSeries as the source...

Prerequisites on the TM1 server:
.Mike Cowie's TIExecute or download it from the attachment below
.iSeries Client Access components (iSeries Access for Windows Remote Command service)

Procedure to follow
1. drop TM1ChoreExecute, TM1ProcessExecute, associated files and the 32bit TM1 API dlls in a folder on the TM1 server (see readme in the zip for details)
2. start iSeries Access for Windows Remote Command on the TM1 server, set as automatic and select a user that can execute the TM1ChoreExecute
3. in client access setup: set remote incoming command "run as system" + "generic security"
4. on your iSeries, add the following command after all your queries/extracts:
RUNRMTCMD CMD('start D:\path\to\TM1ChoreExecute AdminServer TM1Server UserID Password ChoreName') RMTLOCNAME('10.xx.x.xx' *IP) WAITTIME(10)
10.xx.x.xx IP of your TM1 server
D:\path\to path where the TM1ChoreExecute is stored
AdminServer name of machine running the Admin Server service on your network.
TM1Server visible name of your TM1 Server (not the name of the machine running TM1).
UserID TM1 user ID with credentials to execute the chore.
Password TM1 user ID's password to the TM1 Server.
ChoreName name of requested chore to be run to load data from the iSeries.

You should consider setting a user/pass to restrict access to the iSeries remote service and avoid abuse.
But ideally an equivalent of TM1ChoreExecute should be compiled and executed directly from the iSeries.


store any files in the Applications folder

The Applications folder is great but limited to views and .xls files; well, not anymore ;).
The following explains how to make just any file available in your Applications folders.

1. create a file called myfile.blob in }Applications\ on your TM1 server
it should contain the following 3 lines:
ENTRYNAME=tutorial.pdf
ENTRYTYPE=blob
ENTRYREFERENCE=TM1:///blob/public/.\}Externals\tutorial.pdf

2. place your file, tutorial.pdf in this case, in }Externals or whatever path you defined in ENTRYREFERENCE

3. restart your TM1 service

ENTRYNAME is the name that will be displayed in Server Explorer.
ENTRYREFERENCE is the path to your actual file. The file does not need to be in the }Externals folder, but the server must be able to access it.

/!\ Avoid large files: there is no indication telling users to wait while the file loads, so impatient users might click the file several times and involuntarily flood the server or themselves.
/!\ Add the extension in ENTRYNAME to avoid confusion: although it is not a .xls file, it will be displayed with an XLS icon.


TM1 services on the command line

removing a TM1 service
in a DOS shell:
go to the \bin folder where TM1 is installed then:
tm1sd -remove "TM1 Service"
where "TM1 Service is the name of an existing TM1 service
or: sc delete "TM1 Service"

removing the TM1 Admin services
sc delete tm1admsdx64
sc delete TM1ExcelService

installing a TM1 service
in a DOS shell:
go to the \bin folder where TM1 is installed then:
tm1sd -install "TM1 Service" DIRCONFIG
where DIRCONFIG is the absolute path where the tm1s.cfg of your TM1 Service is stored

manually starting a TM1 service
from a DOS shell in the \bin folder of the TM1 installation:
tm1s -z DIRCONFIG

remotely start a TM1 service
netsvc /start \\TM1server "TM1 service"
sc \\TM1server start "TM1 service"

remotely stop a TM1 service
netsvc /stop \\TM1server "TM1 service"
sc \\TM1server stop "TM1 service"

more details on netsvc and sc


TM1 sudoku

Beyond the purely ludic and mathematical aspects of sudoku, this code demonstrates how to set up dimensions, cubes, views, cell formatting, and security at element and cell level, all through Turbo Integrator in just one process.

Thanks to this application, you can prove your TM1 ROI: none of your company employees will ever need to shell out £1 for their daily sudoku from the Times.
Alternatively, you could move your users to a "probation" group before they start their shift. It is only by successfully completing the sudoku that the users will be moved back to their original group.
This way you can ensure your company employees are mentally fit to carry out changes to the budget, especially after the previous evening's drinking excesses down the pub.

Of course, many sudokus exist for Excel; this one is meant to be played primarily from the Cube Viewer, but you could also slice the view and play it from Excel too.

How to install:
.Save the processes in your TM1 folder and reload your server or copy the code directly to new turbo integrator processes.
.Execute "Create Sudoku". That creates the cube, default view and new puzzle in less than a second.
[screenshots: sudoku input and solution grids]
The user can input numbers in the "input" grid only where there are zeroes. The "solution" grid cannot be read by default.
.Execute "Check Sudoku" to verify your input grid matches the solution.
If you are logged in under an admin account, you will not see any cells locked; you need to be in the group defined in the process to see the cells properly locked.

You might want to change the default group allowed to play and the number of initial pairs that are blanked in order to increase difficulty.

The algorithm provided to generate the sudoku could quickly be modified to solve any sudoku by brute force. Provided the sudoku grid is valid, it will find a solution; however, some sudokus with too many empty cells will have more than one solution.

This post is published on April 1st, but I can assure you the code is not an April fool's joke: it works and it was tested on TM1 9.0.3.


TM1Top

Real-time monitoring of your TM1 server, pretty much like the GNU top command.

[screenshot: TM1Top]

It is bundled with TM1 only from version 9.1. You might have to ask your support contact to get it or get Ben Hill's TM1Top below.

. dump the files in a folder
. edit tm1top.ini, replace myserver and myadminhost with your setup

servername=myserver
adminhost=myadminhost
refresh=5
logfile=C:\tm1top.log
logperiod=0
logappend=T

. run the tm1top.exe

Commands:
X exit
W write display to a file
H help
V verify/login to allow cancelling jobs
C cancel threads; you must first log in to use that command
Keep in mind all it does is to insert a "ProcessQuit" command in the chosen thread.
Hence it will not work if the user is calculating a large view or a TI is stuck in a loop where it never reads the next data record, as the quit command is entered for the next data line rather than the next line of code. Then your only option becomes to terminate the user's connection with the server manager or API. (thanks Steve Vincent).



Ben "Kyro" Hill did a great job developing a very convenient GUI TM1Top. You can find it attached below.

[screenshot: TM1Top tray icon]
(green = mostly idle, orange = user data request, red = process running/rule saving/overload)


tm1web customizer

The tm1web customizer will allow you to change the default logos and color schemes of tm1web from a graphical interface.
It aims to make it more convenient to customise your TM1Web without having to dig into the code.
It can be found here: ftp://ftp.applix.com/pub/Gruenes/TM1WebAppCustomizer.zip

Note, however, that it is configured to work with 9SP1.


TM1Web vs TM1 Server Explorer DeathMatch

I have a strong dislike for TM1Web and here is why...

Quick Traffic Analysis comparison
On the recommended practices site from Applix, the following article TM1 Deployment Options and Network Bandwidth Considerations claims that TM1Web is more suited to low bandwidth networks.
O RLY? So I decided to give it a go with Wireshark, a great network analysis tool formerly known as Ethereal.

I did 2 runs: one with Server Explorer (direct over TCP/IP, no HTTP), the other with TM1Web.
The analysis takes place between a Windows XP client and a Windows 2000 Advanced Server machine hosting TM1. Both are using TM1 9.0SP2; the only customisation brought to TM1Web was to remove the top-left TM1 logo, which should have only a negligible effect on the statistics.

In each case:
.close all connections to TM1 server
.on the client host, Wireshark capture filter set to log only packets to and from the TM1 Server
Capture -> Options
set Interface to the ethernet card in use
set capture filter to that string: host "TM1 server IP"
if the TM1 server has the IP 192.168.0.10 then the capture filter must be:
host 192.168.0.10
.check the capture baseline is flat to be sure there will be no other traffic
.start logging packets just before opening the view
.open a "decent" view, 412 rows x 8 columns
.scroll through all the rows until bottom is reached
.stop logging

Results (in Wireshark, Statistics -> Summary):
978 kBytes went through the network with TM1Web
150 kBytes went through the network with server explorer/cube viewer

So much for saving bandwidth with TM1Web: it actually consumes roughly 6.5 times as much traffic as Server Explorer.

If I get more time I will look into the packets to see why there is so much overhead with TM1Web; my initial guess is that it is caused by the additional HTTP protocol layer.


This time I tried with another view, 7 dimensions, 415 rows by 9 columns
similar results:
947 kB for TM1 Web
147 kB for cube viewer

And I pushed the analysis a bit further.
Wireshark menu: Statistics -> Protocol Hierarchy. [screenshot: protocol hierarchy]
As you can see, HTTP takes up only 8.7% of the total traffic, but that is already 47 kBytes just to embed the data on the wire; the cube viewer would already have transferred 30% of the view in the same number of bytes!

Now let's break down the conversation between the client and the server.
From the Wireshark menu: Statistics -> Conversation List -> TCP
The popup window now displays the TCP conversations by size; the fattest are at the bottom.
[screenshot: TCP conversation list]

So let's see what is causing all that traffic...
Right click the last one: Apply As Filter-> Selected -> A--B
then from the Wireshark menu: Analyze -> Follow TCP Stream

[screenshot: TCP stream]
You can now see what makes up all that traffic, and the culprit is....
OMG ALL THAT JUNK HTML CODE!
and that is sent every single time you press the little arrows to change the page on a view.

You would think TM1Web would somehow send only the actual data and leave the formatting to the client (AJAX?) to spare the network and boost response times; well, it is just not the case.