Xtivia | Your Trusted Technology Partner | http://www.xtivia.com

Query Liferay's database to find documents marked searchable (Fri, 31 Jul 2015)
http://www.xtivia.com/query-liferays-database/
This post shows you how to query Liferay's database to see which Documents and Media files are marked 'searchable'. For these examples I was using SQL Server and did not validate that the queries work on other databases, but they are simple enough to port if you are running Oracle, DB2, or any other major relational database platform.

Figuring out how to query Liferay's database for this information is not straightforward.  My original expectation was for the DLFileEntry table to have an 'indexable' column similar to the JournalArticle table.  "No big deal", I thought to myself, "it must be on the DLFileVersion table".  Wrong!  When I did not see an 'indexable' field on the DLFileVersion table either, and found no references in the 'typeSettings' data, I started feeling a bit uneasy.  How the heck is this info stored?

The difficulty of finding any information on this topic prompted this blog post to help you query Liferay's database for Documents and Media marked as searchable.  It turns out that Liferay uses Expando attributes to record which documents are marked 'searchable'.

Basic Documents and Media Query

Here is the basic starter to help you query Liferay’s database for the current documents and media entries.


SELECT FE.fileEntryId 
     ,FE.version 
     ,FE.title 
     ,EV.[data_] AS 'searchable' 
FROM [ExpandoValue] EV 
     INNER JOIN [ExpandoColumn] EC on EV.columnId = EC.columnId 
     INNER JOIN [DLFileVersion] FV on EV.classPk = FV.fileVersionId 
     INNER JOIN [DLFileEntry] FE on FE.fileEntryId = FV.fileEntryId 
WHERE EV.classNameId = 10010 
     AND EC.name = 'searchable' 
     AND FE.version = FV.version 
ORDER BY FV.fileEntryId, FV.version

This query should be enough to get a developer going or to support basic reporting.
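One caveat before you run these: the classNameId value (10010 above) is generated per Liferay installation, so it will almost certainly differ in your environment. A quick way to find the right value is to look it up in the ClassName_ table; the LIKE filter below is only an example pattern.


-- classNameId values are assigned per installation; find the one for your
-- environment and substitute it for 10010 in the queries in this post.
SELECT classNameId, value
FROM [ClassName_]
WHERE value LIKE '%documentlibrary%'
ORDER BY value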

Add Group_ and Organization_ tables to the query

For installations where content is stored across multiple sites or organizations, you will need to add joins on the [Group_] and [Organization_] tables. This will at least tell you where the documents are stored so someone can go into the Control Panel and find them.


SELECT G.name
     ,O.name
     ,FE.fileEntryId 
     ,FE.version 
     ,FE.title 
     ,EV.[data_] AS 'searchable' 
FROM [ExpandoValue] EV 
     INNER JOIN [ExpandoColumn] EC on EV.columnId = EC.columnId 
     INNER JOIN [DLFileVersion] FV on EV.classPk = FV.fileVersionId 
     INNER JOIN [DLFileEntry] FE on FE.fileEntryId = FV.fileEntryId 
     INNER JOIN [Group_] G on FE.groupId = G.groupId
     LEFT OUTER JOIN [Organization_] O on O.organizationId = G.classPK
WHERE EV.classNameId = 10010 
     AND EC.name = 'searchable' 
     AND FE.version = FV.version 
ORDER BY FV.fileEntryId, FV.version

Take note that when you query Liferay’s database with the queries above, you will not get all versions of the files. These queries only return the current file version.

Query for file version history

This next query will show you how to query Liferay’s database for the full file version history.


SELECT G.name
     ,O.name
     ,FE.fileEntryId
     ,FV.version
     ,FE.title   
     ,EV.[data_] AS 'searchable'
FROM [ExpandoValue] EV
     INNER JOIN [ExpandoColumn] EC on EV.columnId = EC.columnId
     INNER JOIN [DLFileVersion] FV on EV.classPk = FV.fileVersionId
     INNER JOIN [DLFileEntry] FE on FE.fileEntryId = FV.fileEntryId
     INNER JOIN [Group_] G on FE.groupId = G.groupId
     LEFT OUTER JOIN [Organization_] O on O.organizationId = G.classPK
WHERE EV.classNameId = 10010
     AND EC.name = 'searchable'
ORDER BY FV.fileEntryId, FV.version

Profiling Data with SSIS (Mon, 20 Jul 2015)
http://www.xtivia.com/profiling-data-ssis/
SQL Server Integration Services offers a useful tool to analyze data before you bring it into your Data Warehouse.  The Profile Task will store the analysis in an XML file, which you can view using the Data Profile Viewer.  Before we review how to use the Profile Task, let’s take a look at the eight types of profiles that can be generated by this control.

  • Candidate Key Profile Request
    • Use this profile to identify the columns that make up a key in your data
  • Column Length Distribution Profile Request
    • This profile reports the distinct lengths of string values in selected columns and the percentage of rows in the table that each length represents. Use this profile to identify invalid data, for example a United States state code column with more than two characters.
  • Column Null Ratio Profile Request
    • As the name implies, this profile will report the percentage of null values in selected columns. Use this profile to identify unexpectedly high ratios of null values.
  • Column Pattern Profile Request
    • Reports a set of regular expressions that cover the specified percentage of values in a string column. Use this profile to identify invalid strings in your data, such as Zip Code/Postal Code that do not fit a specific format.
  • Column Statistics Profile Request
    • Reports statistics such as minimum, maximum, average and standard deviation for numeric columns, and minimum and maximum for datetime columns. Use this profile to look for out of range values, like a column of historical dates with a maximum date in the future.
  • Column Value Distribution Profile Request
    • This profile reports all the distinct values in selected columns and the percentage of rows in the table that each value represents. It can also report values that represent more than a specified percentage in the table.  This profile can help you identify problems in your data such as an incorrect number of distinct values in a column.  For example, it can tell you if you have more than 50 distinct values in a column that contains United States state codes.
  • Functional Dependency Profile Request
    • The Functional Dependency Profile reports the extent to which the values in one column (the dependent column) depend on the values in another column or set of columns (the determinant column). This profile can also help you identify problems in your data, such as values that are not valid. For example, you profile the dependency between a column of United States Zip Codes and a column of states in the United States. The same Zip Code should always have the same state, but the profile discovers violations of this dependency.
  • Value Inclusion Profile Request
    • The Value Inclusion Profile computes the overlap in the values between two columns or sets of columns. This profile can also determine whether a column or set of columns is appropriate to serve as a foreign key between the selected tables. This profile can also help you identify problems in your data such as values that are not valid. For example, you profile the ProductID column of a Sales table and discover that the column contains values that are not found in the ProductID column of the Products table.

To use the Data Profiling Task, follow these steps.

1) Start an Integration Services project and drag the Data Profiling Task from the Toolbox onto the Control Flow.

[Screenshot: Adding a Data Profiling Task]

2) Double-click the Data Profiling Task. The following dialog will appear, where you can set the destination for the XML file that will hold the profile analysis.

[Screenshot: Data Profiling Task Editor]

3) Set DestinationType to FileConnection, then select Destination and choose <New File connection…>. The following dialog will appear.

[Screenshot: Profile Task Results File Location]

4) Set Usage type to Create File, browse to the location where you want to store the Data Profile XML file, and give the file a name. The dialog will look something like the following.

[Screenshot: Choose the type and location of the Profile results]

5) Click OK and then select Profile Requests in the Data Profiling Task Editor. You will see a list of all the profile types you can generate. We will generate just the functional dependency profile, so select Functional Dependency Profile Request from the View drop-down, as shown below.

[Screenshot: Profile Request Selection]

6) We want to run this profile on the ProductSubcategoryID column against all other columns in the Product table.  Click on the row in the pane below the View option to display the entries in the Request Properties pane.

[Screenshot: Profile Request – Functional Relationship]

7) The Determinant column is the column that determines the values of the other columns. For example, if we had a table of US state codes and state names, the code column would be the determinant column and the name column the dependent column. We need to identify what those columns are for the Product table. In the Request Properties pane, select the DeterminantColumns option and, in the drop-down dialog, remove the wildcard selection and select the ProductSubcategoryID column.

[Screenshot: Functional Request Determinant Column]

8) Leave the asterisk in the DependentColumn option; this wildcard indicates that ProductSubcategoryID should be compared against every other column in the Product table. Also note the FDStrengthThreshold option, which lets us specify the threshold that determines when a functional relationship exists between two columns. In this example we are saying that if fewer than 95% of the rows support the dependency between a pair of columns, those columns do not have a functional relationship and will not be reported in this profile.

[Screenshot: Profile Request Properties]

9) Click the OK button and then run the package.  When the package completes successfully, you will have an XML file in the location you specified in Step 4.  You can use the Data Profile Viewer to view the XML file.  The viewer application can be found in the Integration Services folder under the Microsoft SQL Server 2008 R2 folder.

[Screenshot: Profile Viewer location]

10) The Data Profile Viewer opens a dialog that lets you specify the path to the XML file you created. If you don't see your XML file, make sure you specified the .xml extension when you entered the name in Step 4; if you didn't, just add the extension to the file name.

[Screenshot: Data Profile Location]

11) Click on the Functional Dependency Profiles option under the Product table. The Functional Dependency Profiles pane shows our chosen determinant column, ProductSubcategoryID, and the other columns from the table in which at least 95% of the rows had the same pair of values in the determinant and dependent columns.

[Screenshot: Functional Dependency Profile Results]

12) With the Style dependent column selected in the Functional Dependency Profiles pane, we see in the Functional Dependency Violations pane that there are five sets of ProductSubcategoryIDs with values in the dependent Style column that deviate from the majority value for that ProductSubcategoryID. Double-click ProductSubcategoryID 14 in the Functional Dependency Violations pane. The Functional Dependency Profiles pane now shows the rows that support the functional dependency and those that violate it for ProductSubcategoryID 14.

[Screenshot: Functional Dependency Results with Support and Violation Rows]

13) In the Functional Dependency Violations grid, we can see that for a ProductSubcategoryID of 14 there is a Support Percentage of 84.8485 percent for the Style column. This means that there were 33 rows with a ProductSubcategoryID of 14, and 28 of those rows had a U in the Style column while the remaining 5 had some other value. To see the detail behind this, click on the row in the Functional Dependency Violations grid, and the Supported Rows and Violation Rows will be shown in two grids below. Now you can see exactly what the values of the Style column were for the violation rows; in this case they were all W.
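If you ever want to sanity-check a support percentage outside the viewer, a query along these lines reproduces the calculation. It is only a sketch and assumes the AdventureWorks Production.Product table shown in the screenshots.


-- For each ProductSubcategoryID, what share of its rows carry the most common Style value?
SELECT ProductSubcategoryID
     ,MAX(StyleCount) * 100.0 / SUM(StyleCount) AS SupportPercentage
FROM (
     SELECT ProductSubcategoryID, Style, COUNT(*) AS StyleCount
     FROM Production.Product
     GROUP BY ProductSubcategoryID, Style
) AS StyleCounts
GROUP BY ProductSubcategoryID
ORDER BY SupportPercentage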

Determining whether or not this identifies a problem in the data is up to someone with business knowledge of the Product table.  The purpose of the Data Profile Task is to alert us to potential issues and guide us on where to look in the data.

This and the other seven profile requests offered in the Data Profiling Task can ease your burden when it comes time to profile new, unknown data sets.

Managing DB2 Transaction Log Files (Thu, 16 Jul 2015)
http://www.xtivia.com/managing-db2-transaction-log-files/
Logging method

There are two methods of logging that DB2 supports: Circular and Archive. Other RDBMSes have similar modes.

Circular

The default that DB2 uses if you don’t change anything is Circular logging. Circular logging is more often appropriate for databases where you don’t care about the data (I have seen it used for databases supporting Tivoli monitoring and other vendors) or for Data Warehousing and Decision Support databases where you have extremely well defined data loading processes that can easily be re-done on demand. You must also be willing to take regular outages for database backups because Circular logging does not allow you to take online backups. Circular logging also does not allow you to rollforward or back through transaction logs to reach a point in time – any restores are ONLY to the time a backup was taken.

On most new builds, I move away from circular logging. I just don’t find it appropriate for most databases, especially transaction processing databases where requirements often include very high availability and the ability to recover from all kinds of disasters with no data loss.

Archive

So why isn’t archive logging the default? Well, it requires proper management of transaction log files, which can really get you in trouble if you don’t know what you’re doing. If you compress or delete an active transaction log, you will crash your database and have to restore from a backup. I’ve seen it happen, and it’s not fun. The directories holding transaction log files should get the most frequent OS-level backups you are willing to do.

I ensure that my archive logs are always on a separate path from the active ones so I, and whoever gets paged out when a filesystem is filling up, can easily see which is which. They should preferably be on a separate filesystem or archived to a location other than the database server itself. TSM, NetBackup, NetWorker, and other third-party tools have interfaces for this.

Scripts work well for managing transaction log files, and a backup script can be a good place to do this. How long you keep the logs depends on your restore requirements and your overall backup/restore strategy.

To compress archived transaction logs, you can either use the LOGARCHCOMPR1 database configuration parameter, or use a simple cron job to find files in the archive log path older than a certain time frame (1 day or 3 days is most common) and compress them. A nice safe way to delete logs is the prune logs command.
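As an illustration only (the path, file pattern, and three-day retention are assumptions, not recommendations), a cron entry for the compression approach might look like this:

# Compress DB2 archive logs older than 3 days; /db2/archlogs is an assumed
# archive path - point this at your LOGARCHMETH1 location, test carefully,
# and never point it at the active log path.
0 2 * * * find /db2/archlogs -type f -name "S*.LOG" -mtime +3 -exec gzip {} \;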

This is one of the areas where it is critical for DBAs to have an excruciatingly high level of attention to detail.

Logging Settings

Ok, ready for the most complicated part?

All the settings discussed here are in the db cfg.

LOGRETAIN

The LOGRETAIN parameter was discontinued in DB2 10.1. The LOG_RETAIN_STATUS parameter can be used to understand what the other settings do, but you cannot set it directly.

To look at the value of this, use this command and look at the first two rows of output:

$ db2 get db cfg for sample |grep -i log
 Log retain for recovery status                          = NO
 User exit for logging status                            = NO

If ‘Log retain for recovery status’ is set to ‘NO’, then you have circular logging. If it is set to ‘Recovery’ then you have archive logging. To enable archive logging, you simply update LOGARCHMETH1 to a valid value.

LOGARCHMETH1

This parameter specifies the separate path for your archive logs to be sent to. It can be a location on DISK, TSM or other VENDOR library.

http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.config.doc/doc/r0011448.html
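Putting the two pieces together, a minimal sketch of switching a database to archive logging on disk (the database name and paths are assumptions) looks like this; note that the change puts the database in backup pending state:

# Assumed database name and paths - adjust for your environment
db2 update db cfg for sample using LOGARCHMETH1 DISK:/db2/archlogs
# Moving from circular to archive logging leaves the database in backup pending
# state, so an offline backup is required before applications can connect again
db2 backup db sample to /db2/backups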

WTH is this USEREXIT thing?

I undoubtedly have some newer DBAs wondering about this. The LOGARCHMETH1 parameter and the others that dictate the location of archive logs were only introduced in DB2 8 (or was it 7?). Before that, we had these nasty things called userexit programs: we had to track down C compilers to build them, keep the uncompiled source handy in case changes were needed, and make sure the compiled file sat in the right place with the right permissions. Really, I hated working with them. But the functionality is still in DB2 if you want to use it. I imagine userexits could do things you can’t do natively, but the parameters are so good that it would be a rare situation where you need them.

LOGFILSIZ

This is the size of each transaction log file, specified in 4 KB pages. Generally my default for OLTP databases is 10000, but I’ve gone higher – it’s not unusual to go up to 40,000. Other values may be valid, depending on circumstances.

LOGPRIMARY

This determines the number of log files of the size LOGFILSIZ that compose the database’s active log files. These are all created on database activation (which happens on first connection), so you don’t want to go too large. But you do generally want to have enough space here to handle your active logs.

LOGSECOND

This determines the number of additional active logs that can be allocated as needed. LOGPRIMARY + LOGSECOND cannot exceed 255. The nice thing about LOGSECOND is that these are not allocated on database activation, but only as needed. You can keep database activation faster by having a lower number for LOGPRIMARY and a higher number for LOGSECOND, if database activation time matters. The other awesome thing here is that LOGSECOND can be increased online – one of the few logging parameters that can be. I usually start with 50, but increase if there’s a specific need for more. Remember, these should not be used on an ongoing basis – just to handle spikes.
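For illustration (database name and values are assumptions, not sizing advice), the difference matters when you change these settings: LOGSECOND can be raised online, while LOGFILSIZ and LOGPRIMARY changes only take effect after the database is deactivated and reactivated.

# Takes effect immediately for an active database
db2 update db cfg for sample using LOGSECOND 100
# Requires the database to be deactivated and reactivated before it applies
db2 update db cfg for sample using LOGFILSIZ 10000 LOGPRIMARY 20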

All the Others

There are all kinds of nifty things you can do with logging. Infinite logging, mirrored logging, logging to a raw device, etc. I’m not going to cover all the logging parameters there are in this post.

Potential issues

Deleting or Compressing an Active Log File

The best case if you delete or compress an active log file is that DB2 is able to recreate it. This may affect your ability to take online backups. The worst (and more likely) case is that your database ceases functioning and you have to restore from backup. Keep your active and archive logs in separate directories to help prevent this, and educate anyone who might try to alleviate a filesystem-full issue – System Administrators, Ops, or even developers. If you do get an error on an online backup referencing the inability to include a log file, take an offline backup just as soon as you can – you will be unable to take online backups until you do.

Filling up a Filesystem Due to Not Managing Log Files

If your archive log filesystem is separate and fills up, it doesn’t hurt anything. If the filesystem your active log path is on fills up, your database will be inaccessible until you clear up the filesystem full. The moment the filesystem is no longer full, the database will function, so there is no need to restore. I recommend filesystem-level monitoring for any filesystems involved in transaction logging.

Deleting Too Many Log Files and Impacting Recovery

If you’re on anything before DB2 9.5, make absolutely sure that you use the “include logs” keyword on the backup command. If you don’t, you may end up with a backup that is completely useless, because you MUST have at least one log file to restore from an online backup. When you delete log files, keep in mind your backup/recovery strategy. There’s very little worse than really needing to restore but being unable to do so because you’re missing a file. I recommend backing up your transaction logs to tape or through other OS level methods as frequently as you can.

Deleting Recent Files and Impacting HADR

Sometimes HADR needs to access archive log files – especially if HADR is behind and needs to catch up. If you run into this situation, you have to re-set-up HADR using a database restore. If you’re using HADR, it is important to monitor HADR so you can catch failures as soon as possible and reduce the need for archive logs.

Log Files Too Small

Tuning the size of your log files may be a topic for another post, but I’ll cover the highlights. Large deletes are the most likely to chew through everything you’ve got. The best solution is to break up large units of work into smaller pieces, especially deletes. Where that’s not possible, you’ll need to increase any of LOGFILSZ, LOGPRIMARY, or LOGSECOND. Only LOGSECOND can be changed without recycling the database.

Log File Saturation

This one confuses the heck out of new DBAs. You get what looks like a log file full, yet the disk is not full and a snapshot says there’s plenty of log space available. The problem here is that with archive logging, log files and each spot in those log files must be used sequentially – even if there are things that have already been committed. Normally the database is rolling through the logs, with the same number of files active at once, but constantly changing which files.

Sometimes an old connection is sitting out there hanging on to a page in the log file with an uncommitted unit of work. Then the connection becomes idle and stays that way, sometimes for days. Then DB2 gets to the point where it has to open another log file, and it can’t because that would be more than it is allowed to allocate. So it throws an error that looks pretty similar to log file full. In that case, you must force off the old idle connection. Details are written to the diag log, and you can also use a database snapshot to get the id of the connection holding the oldest log file.
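A rough sketch of tracking that down (the database name and application handle are made up for the example):

# The database snapshot reports which application is holding the oldest active log
db2 get snapshot for database on sample | grep -i "oldest transaction"
#   Appl id holding the oldest transaction     = 1234
# Once you have confirmed it is a stale, idle connection, force it off
db2 "force application (1234)"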

Often, this is a developer’s connection. Many applications are re-using connections, and manage them fairly well. If a database has these kinds of issues, I like to have a db2 governor running that forces off connections that are IDLE for more than a specified time. For Websphere Commerce databases, 4 hours works well because they are constantly using the connections. Other applications may have different thresholds.

Stopping DB2 Processes for a DB2 Fix Pack Upgrade (Tue, 07 Jul 2015)
http://www.xtivia.com/stopping-db2-processes-db2-fix-pack-upgrade/
When upgrading DB2 fix packs, you need to make sure that all DB2-related processes have been shut down cleanly. In normal circumstances, you would run:

  1. db2stop (if that does not work, try db2stop force)
  2. db2licd -end
  3. ipclean

Even after following these steps, you may still see some DB2-related processes running. One of them is the fault monitor coordinator (db2fmcd), which is respawned from /etc/inittab rather than started by the instance:

$ grep db2fmcd /etc/inittab
fmc:2:respawn:/usr/opt/db2_08_01/bin/db2fmcd #DB2 Fault Monitor Coordinator

In most cases, upgrading the fix pack is not possible if DB2 finds processes still running. Stopping or killing this process is quite a task and varies by operating system. Instead, try the force option with the installFixPack command.

./installFixPack -f db2lib -b /db2/code/V101

As part of your pre-work before the upgrade, you should capture the installation path (used with the -b option above) by running:

db2level

The output looks something like this:

DB21085I  This instance or install (instance name, where applicable: "db2xxx1")
uses "64" bits and DB2 code release "SQL10014" with level identifier
"0205010E".
Informational tokens are "DB2 v10.1.0.4", "s140509", "IP23584", and Fix Pack
"4".
Product is installed at "/db2/code/v101".

Move data from a custom module when converting a lead in SugarCRM (Mon, 06 Jul 2015)
http://www.xtivia.com/move-data-from-a-custom-module-when-converting-a-lead-in-sugarcrm/
In SugarCRM, a Lead Conversion will not handle data from a custom module. Converting a lead that has data associated with it in a custom module will cause that data to be lost.

In my opinion this should happen automatically as long as you add the module to the Lead Conversion screen. Unfortunately, for now Sugar considers this a feature request for a future version of SugarCRM.

Thankfully, there is a workaround that we can implement in code. The approach is upgrade-safe, so it can also be installed in a Sugar OnDemand environment.

In this example I have a custom module called Criteria which is 1-M with the Lead module. The Criteria module is used to identify the qualification of a particular lead. Once all the criteria have been met, a lead will be converted to an Account, Contact and in this case also an Opportunity. However out of the box, the data from the Criteria module would not transfer over.

Here is how we do it.

First, let's create a manifest for the installable package; we will use Module Loader to install it.

MANIFEST.PHP


<?php 
$manifest = array(
	// Sugar editions this package can be installed on
	'acceptable_sugar_flavors' => array('CE','PRO','CORP','ENT','ULT'),
	'acceptable_sugar_versions' => array(
		'exact_matches' => array(
			0 => '7.5.2.2',
		),
	),

	'author' => 'Xtivia - Doddy Amijaya',
	'description' => 'Install Criteria logic hook',
	'icon' => '',
	'is_uninstallable' => true,
	'name' => 'Criteria Logic Hook',
	'published_date' => '2015-06-30 2015 20:45:04',
	'type' => 'module',
	'version' => '1.0',
 );

$installdefs = array(
	'id' => 'package_201506301159',
		'copy' => array(
				0 => array(
					'from' => '/Files/custom/modules/Leads/criteria_hook.php',
					'to' => 'custom/modules/Leads/criteria_hook.php',
				),
			),

	'logic_hooks' => array(
				array(
				'module' => 'Leads',
				'hook' => 'before_save',
				'order' => 99,
				'description' => 'Criteria Logic Hook',
				'file' => 'custom/modules/Leads/criteria_hook.php',
				'class' => 'Criteria',
				'function' => 'onLeadSave',
				),
			),
);

?>

The manifest and the installable logic hook definition are done; now we create the code for the logic hook itself.

CRITERIA_HOOK.PHP


<?php
if (!defined('sugarEntry') || !sugarEntry) die('Not A Valid Entry Point');

class Criteria
{
    function onLeadSave($bean, $event, $arguments)
    {
        // Collect the Criteria records related to this lead (the 1-M 'criteria_leads' relationship)
        $allcriteria = $bean->get_linked_beans('criteria_leads', 'Crit_Criteria');

        $contactid     = $bean->contact_id;
        $opportunityid = $bean->opportunity_id;
        $accountid     = $bean->account_id;

        $criteriaArray = array();
        foreach ($allcriteria as $criteria) {
            $criteriaArray[] = $criteria->id;
        }

        // Only relink the criteria once the lead has actually been converted
        if ($bean->converted == 1) {
            if ($contactid) {
                $contact = new Contact();
                $contact->retrieve($contactid);
                $contact->load_relationship('criteria_contacts');

                foreach ($criteriaArray as $c) {
                    $contact->criteria_contacts->add($c);
                }
            }

            if ($accountid) {
                $account = new Account();
                $account->retrieve($accountid);
                $account->load_relationship('criteria_accounts');

                foreach ($criteriaArray as $c) {
                    $account->criteria_accounts->add($c);
                }
            }

            if ($opportunityid) {
                $opp = new Opportunity();
                $opp->retrieve($opportunityid);
                $opp->load_relationship('criteria_opportunities');

                foreach ($criteriaArray as $c) {
                    $opp->criteria_opportunities->add($c);
                }
            }
        }
    }
}
?>

When this is done, we can zip up the two files and install the package using Module Loader.

Hope this saves you some time! If you have any questions, please call the XTIVIA CRM support line at 1-877-777-9779.

Accessing InforCRM control values from Javascript (Wed, 01 Jul 2015)
http://www.xtivia.com/accessing-inforcrm-control-values-javascript/
Often, when one needs to access the value set in an HTML control from client-side script, it is tempting to reach into the DOM object and retrieve it directly, using something like this:

var c = document.getElementById('TabControl_element_AccountOpportunities_element_view_AccountOpportunities_AccountOpportunities_InactiveTotal_InputCurrency_CurrencyTextBox');
// this will retrieve the value as a string - still needs to be parsed as a number!
var value = c.value;

Then, to set the value, something like:

// need to implement "formatValue" somehow!
c.value = formatValue(value);

The hard-coded id is not great, of course, but the main problem with this approach, in my opinion, is that you bypass the localization logic and have to parse / format the numbers yourself. Another potential issue is that, because the logic of the “dijit” (the client-side widget used to render and control the currency textbox) is bypassed, its internal state can become inconsistent. It is often more reliable to access the dijit instead, using code such as:

var c = dijit.byId('TabControl_element_AccountOpportunities_element_view_AccountOpportunities_AccountOpportunities_InactiveTotal_InputCurrency_CurrencyTextBox');
// this will retrieve the "parsed" value as a number
var value = c.get('value');

And to set it:

c.set('value', 3000);
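Along the same lines, if you need to react when the value changes, you can hook the dijit's onChange event instead of wiring a raw DOM listener. This is just a sketch; the handler body is only an illustration.

dojo.connect(c, 'onChange', function (newValue) {
    // newValue arrives already parsed as a number by the currency textbox dijit
    console.log('Inactive total changed to: ' + newValue);
});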

If you have some good examples of accessing the controls from client side script please share them in the comments!

Before you restore! (Tue, 30 Jun 2015)
http://www.xtivia.com/restore/
There are times when you may have to restore from a production box to a beta or QA environment as a different user. In that scenario, the restore commands will work just fine, but you will then receive authorization errors even when the ID you used for the restore has SYSADM.

Cause:

Starting with DB2 9.7, SYSADM no longer has implicit DBADM privileges due to a change in the security model. You may see SQL errors like SQL0551N, SQL0552N, or SQL3020N.

Resolution:

Set the DB2_RESTORE_GRANT_ADMIN_AUTHORITIES registry variable BEFORE performing the restore. An instance bounce is required.

db2stop
db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=ON
db2set -all | grep -i db2_restore_grant
[i] DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=ON
db2start

Once you set this variable, SECADM, DBADM, DATAACCESS, and ACCESSCTRL authorities are granted to the user that issues the restore operation.

Unspecified Error logging into Infor CRM / SalesLogix (Thu, 25 Jun 2015)
http://www.xtivia.com/unspecified-error-logging-into-infor-crm-saleslogix/
If you’ve worked with Infor CRM / SalesLogix for a while, you’ve probably run into the issue where you enter your username and password (correctly!) and all you get is an Unspecified error! Very descriptive, isn’t it?

Cause: Essentially, what the error means is that the application is unable to connect successfully to the SQL Server database. There are several reasons for this, hence many possible fixes.


1) Correct Native Client Installed – the most common issue (and this affects Windows Client users or any web user who has any of the Windows based tools installed) is that the correct SQL Native Client is not installed.

The SQL Native Client is like a “driver”; you can install multiple versions on the same machine (for example SQL 2005 Native Client, SQL 2008 R2 Native Client, and SQL 2012 Native Client) without any adverse effects.

However, you do need to verify you have the CORRECT one installed. The correct one is the one being used on the Infor CRM / SalesLogix server in the Saleslogix Connection Manager. In the example below you’ll notice that the EVAL81 database is using SQLNCLI10.1.

[Screenshot: Saleslogix Connection Manager]

Here’s how the various versions of the SQL Native Client show up in the Data Link Properties:

[Screenshot: SQL Native Client versions in Data Link Properties]

Native Client Version    SQL Server Version
9.x                      SQL 2005
10.x                     SQL 2008 (or 2008 R2)
11.x                     SQL 2012
12.x                     SQL 2014

So based on the above, this tells us that the database connection is being made using a SQL 2008 (R2) Native client and that’s the version that should be installed to ensure connectivity.

Note that the version of the SQL Native Client doesn’t necessarily have to match the version of SQL Server being used; technically, a SQL 2005 Native Client could be used to connect to a database on a SQL 2008 platform. However, it is recommended that the same version as the SQL Server is used. Using the wrong version of the SQL Native Client causes issues with SQL 2012 databases in particular.


2) Saleslogix Connection Manager – Often the Saleslogix Connection Manager has not been set up correctly. A quick way to test this is to try connecting from another client machine or from the server itself. Check to ensure that the connection to the database has been set up correctly and, for the SQL 2008 Native Client and higher, ensure that you click on the Advanced Settings and set the following:

  • Integrated Security  – Click the Reset Button
  • Persist Security Info – Set to True


3) SQL Server Network Access – If none of the clients can connect, you may have an issue with the SQL server itself. By default SQL Express instances do not have TCP/IP protocol enabled so they are accessible via the local machine only. Check the settings in the SQL Server Configuration Manager to ensure TCP/IP protocol is enabled for your SQL instance.


4) Firewall and Network Connectivity – Even though the client connects via the Saleslogix Application Server, it still needs direct access to the SQL server as well. See if you can ping the SQL server from the client machine. You can also try to Telnet to the SQL server at port 1433.
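For example, from a command prompt on the client (the server name here is a placeholder, and 1433 is only the default SQL Server port; yours may differ):

REM Replace sqlserver01 with your SQL Server host name
ping sqlserver01
telnet sqlserver01 1433
REM On newer Windows versions without the telnet client, PowerShell can test the port
powershell -Command "Test-NetConnection sqlserver01 -Port 1433"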


5) Corrupt Connection Settings – occasionally you run into a client machine that has corrupt SalesLogix connection settings, even though they appear to be correct and pass the connection test. It’s recommended that the client connection be deleted and re-created. You may also have to delete all subkeys under:

HKCU\Software\SalesLogix\ADOLogin


6) Reboot – When all else fails, maybe all that’s needed is a reboot! Sometimes SQL settings don’t take effect until a reboot is done.


Hope this brings some clarity to this very vague error message! If you need further assistance, call Xtivia’s CRM support line at 1-877-777-9779 and we’ll be happy to help you!

What are Page Splits? (Tue, 23 Jun 2015)
http://www.xtivia.com/page-splits/
Are you noticing a drop in performance? Are you seeing a large percentage of index fragmentation? If your data page fill factor is set to a high number, page splits could be the culprit. We have helped many of our VDBA customers with these problems.

What are Page Splits?

Page splits occur when there is not enough free space on a data page to insert or update data. SQL Server takes the excess data and puts it on another data page. Imagine you have three 8 oz jars filled to the rim with alternating layers of M&M colors. You need to add green M&Ms to a layer in the middle of one of the jars. The jars cannot hold any more M&Ms, so you have to get another jar. You take out half the M&Ms from the jar with the green layer and place them in the new jar. Then you add the green M&Ms. Now you have four 8 oz jars, but only two are full. That costs time and resources and results in index fragmentation, which can decrease performance.

What do I need to look for to identify if page splits are the problem?

The Performance Monitor counter “SQLServer:Access Methods\Page Splits/sec” shows the number of page splits that occur per second. To determine whether this amount is a problem, several factors need to be taken into consideration, including workload, workload type, table size, and the fill factor value. A large number of batch requests consisting mostly of insert and update statements will cause a high number of page splits per second, because of the amount of changes being made to data pages when a batch is executed. Large tables will also boost the number of page splits per second, because there are more data pages that need to be changed. Indexes with high fill factor values do not leave enough free space for changes, so they will also increase the number of splits.
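You can also read the same counter from inside SQL Server. This is a minimal sketch using the sys.dm_os_performance_counters DMV; keep in mind the value is cumulative since the instance started, so compare snapshots over time to get an actual rate.

-- Cumulative page split count since the instance started
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Splits/sec'
  AND object_name LIKE '%Access Methods%'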

How can I fix it?

One of the best places to start when fixing page splits is to set a fill factor value when the index is rebuilt. By default, SQL Server sets the fill factor to 100, which uses 100% of data page space. In the M&M example, filling up 100% of the jar space reduced the number of jars but left no room for changes, so there was no room to add more M&Ms later on. Likewise, when an INSERT or UPDATE statement runs and there is no room left on the data page to make the change, new data pages are created and half of the data is moved to the new page. The best value for the fill factor varies based on the needs of the client and their system. If the fill factor is set too high, page splits can occur. If the fill factor is set too low, the data is spread out too much and creates more data pages than needed, which can also affect performance. If you have questions about finding the right fill factor value for your indexes or need assistance with SQL Server in general, reach out to us! XTIVIA and I can assist you with adding resiliency for your business.
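As a hedged illustration (the index and table names are made up, and 90 is only an example value, not a recommendation), setting the fill factor during an index rebuild looks like this:

-- Rebuild the index leaving 10% free space on each leaf-level page
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90)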

How to extract data using db2audit (Tue, 23 Jun 2015)
http://www.xtivia.com/db2audit/
One of our customers recently wanted to extract data using db2audit, but the documentation on this topic is very limited. Moreover, the commands changed after V9.7, and finding documentation or following it can be tricky. After some testing, we were able to provide the customer with the exact syntax to use to extract the data. Here's how:

Make sure db2audit is on

$ db2audit start

$ db2audit flush

This forces any pending audit records to be written to the audit log. Also, the audit state is reset from “unable to log” to a state of “ready to log” if the audit facility is in an error state.

$ pwd

/home/db2inst1/sqllib/security/auditdata

$ ls -ltr

total 404204

-rw------- 1 db2inst1 db2iadm1      9122 Feb 14  2014 db2audit.instance.log.0.20140214184332

-rw-rw-rw- 1 db2inst1 db2iadm1         0 Feb 14  2014 auditlobs

-rw------- 1 db2inst1 db2iadm1 403732695 Dec  9 22:10 db2audit.instance.log.0.20141209221020

-rw------- 1 db2inst1 db2iadm1   3636219 Dec  9 23:32 db2audit.db.WCST01.log.0.20141209233216

-rw-rw-rw- 1 db2inst1 db2iadm1         0 Dec  9 23:40 audit.del

-rw------- 1 db2inst1 db2iadm1   5750894 Dec  9 23:42 db2audit.db.DBINST1.log.0

-rw------- 1 db2inst1 db2iadm1    333282 Dec  9 23:42 db2audit.instance.log.0


$ db2audit extract delasc delimiter ! category validate from files /home/db2inst1/sqllib/security/auditdata/db2audit.db.WCST01.log.0.20141209233216


AUD0000I  Operation succeeded.


$ ls -ltr

total 404204

-rw------- 1 db2inst1 db2iadm1      9122 Feb 14  2014 db2audit.instance.log.0.20140214184332

-rw-rw-rw- 1 db2inst1 db2iadm1         0 Feb 14  2014 auditlobs

-rw------- 1 db2inst1 db2iadm1 403732695 Dec  9 22:10 db2audit.instance.log.0.20141209221020

-rw------- 1 db2inst1 db2iadm1   3636219 Dec  9 23:32 db2audit.db.WCST01.log.0.20141209233216

-rw-rw-rw- 1 db2inst1 db2iadm1         0 Dec  9 23:40 audit.del

-rw-rw-rw- 1 db2inst1 db2iadm1      7295 Dec  9 23:41 validate.del

-rw------- 1 db2inst1 db2iadm1   5750894 Dec  9 23:42 db2audit.db.DBINST1.log.0

-rw------- 1 db2inst1 db2iadm1    333969 Dec  9 23:42 db2audit.instance.log.0
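Once the extract has produced the delimited file (validate.del above), you will usually want it in a table. This is only a sketch: it assumes you have already created the audit tables from the DDL shipped in sqllib/misc/db2audit.ddl, that they live in the AUDIT schema, and that a database named auditdb exists to hold them. Note the coldel! modifier matches the '!' delimiter used in the extract command.

# Assumed database and schema; the audit table DDL ships in ~/sqllib/misc/db2audit.ddl
db2 connect to auditdb
db2 "import from validate.del of del modified by coldel! insert into audit.validate"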


For more information on db2audit, please refer to:

http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0002072.html?cp=SSEPGG_9.7.0%2F3-6-2-6-13
