MCT Summit EU – Helsinki, November 28-30 2013

For all of you who currently hold MCT status, and for those of you planning to achieve it, I highly recommend that you consider attending the MCT Summit EU event.

At this event, you will be able to attend a large number of sessions across the Infrastructure, Information Worker, and soft skills fields. The event gives you a great opportunity to interact with other MCTs and with highly experienced speakers. If you plan to go for MCT status, it would be great to attend the soft skills track. It will give you a solid perspective on the important things that every MCT should know.

Registration for this event is open, and you can find more information, as well as register, at the following URL: http://www.mctsummit.eu/

I hope that some of you will be able to attend this!

Windows Server 2012 R2 Failover Cluster – Global Update Manager

A pretty interesting new feature in Windows Server 2012 R2 failover clustering allows you to manage how the cluster database is updated.

The service responsible for this is called Global Update Manager, and it is in charge of updating the cluster database. In Windows Server 2012, you could not configure how these updates work, but in Windows Server 2012 R2 you can configure the mode in which Global Update Manager operates.

Each time the state of the cluster changes (for example, when a cluster resource goes offline), the cluster nodes must receive notification about the event before Global Update Manager commits the change to the cluster database.

In Windows Server 2012, Global Update Manager works in Majority (read and write) mode. In this mode, when a change happens in the cluster, a majority of the cluster nodes must receive and process the update before it is committed to the database. When a cluster node wants to read the database, the cluster compares the latest timestamps from a majority of the running nodes and uses the data with the latest timestamp.

In Windows Server 2012 R2, Global Update Manager can also work in All (write) and Local (read) mode. In this mode, all nodes in the cluster must receive and process the update before it is committed to the database. However, when a database read request is received, the cluster reads the data from the database copy stored locally. Since all nodes have received and processed the update, the local cluster database copy can be considered a reliable source of information.

Windows Server 2012 R2 also supports a third mode for Global Update Manager: Majority (write) and Local (read). In this mode, a majority of the cluster nodes must receive and process the update before it is committed to the database, and when a database read request is received, the cluster reads the data from the database copy stored locally.

In Windows Server 2012 R2, the default setting for Hyper-V failover clusters is Majority (read and write). All other cluster workloads use All (write) and Local (read) mode by default. Majority (write) and Local (read) is not used by default for any workload.
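
If you want to check or change this behavior, the mode is exposed through the cluster's DatabaseReadWriteMode common property. Here is a minimal PowerShell sketch, assuming the FailoverClusters module and the documented value mapping (0 = All (write) and Local (read), 1 = Majority (read and write), 2 = Majority (write) and Local (read)):

# Check the current Global Update Manager mode
Import-Module FailoverClusters
(Get-Cluster).DatabaseReadWriteMode

# Example: set Majority (read and write), the Hyper-V default
(Get-Cluster).DatabaseReadWriteMode = 1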

Using digital signatures in emails

Implementing digital signatures in an Exchange/Outlook environment is not very complex. However, it requires that you understand how this technology works, and you must also have some infrastructure in place.

Digital signatures protect content integrity. They don't provide any protection in the sense that the content of the message can't be intercepted and read by someone else. However, if the content is altered in transport, the digital signature will alert you to it.

When an author digitally signs a document or a message, the software on his machine creates a message digest, which is typically a 128-bit to 256-bit number generated by running the entire message through a hash algorithm. This number is then encrypted by using the author's private key and added to the end of the document or message.

When the document or message reaches the recipient, it goes through the same hash algorithm as when it was digitally signed. The recipient also uses the author's public key to decrypt the digest that was added to the message. After it is decrypted, it is compared with the digest that the recipient has generated. If they are the same, the document or message was not altered in transport. Also, if the recipient is able to decrypt the digest by using the author's public key, the digest must have been encrypted with the author's private key, and that confirms the author's identity. Finally, the recipient verifies the certificate that was used to prove the author's identity. During this check, the validity period, CRL, subject name, and certificate chain trust are verified. Make sure that the certificates you use for digital signatures have valid CDP and AIA locations defined.
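
To make the sign-and-verify flow concrete, here is a minimal PowerShell sketch using the .NET SignedCms class, the same PKCS #7 machinery that S/MIME mail relies on. The certificate selection below is an assumption; in practice you would pick the user's signing certificate from the personal store:

# Load the PKCS #7 (CMS) types
Add-Type -AssemblyName System.Security

# Pick a certificate with a private key from the current user's store (assumption: one exists)
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "My","CurrentUser"
$store.Open("ReadOnly")
$cert = $store.Certificates | Where-Object { $_.HasPrivateKey } | Select-Object -First 1

# Author side: hash the content and sign the digest with the private key
$content = [System.Text.Encoding]::UTF8.GetBytes("Message body to protect")
$contentInfo = New-Object System.Security.Cryptography.Pkcs.ContentInfo (,$content)
$signedCms = New-Object System.Security.Cryptography.Pkcs.SignedCms $contentInfo
$signedCms.ComputeSignature((New-Object System.Security.Cryptography.Pkcs.CmsSigner $cert))
$encoded = $signedCms.Encode()

# Recipient side: decode and verify; this throws if the content was altered in transport
$received = New-Object System.Security.Cryptography.Pkcs.SignedCms
$received.Decode($encoded)
$received.CheckSignature($false)   # $false also validates the signer's certificate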

To implement digital signatures in internal communications, you just need to issue certificates based on the User template. This certificate template is present by default on each Windows Server CA. Of course, you can also use a custom template for this, or you can use smart card certificates for digital signatures, which is actually pretty common if a smart card infrastructure is deployed. You must issue certificates to all users who will sign messages as authors (recipients don't need a certificate just to read a digitally signed message). You can issue the certificates without any user intervention if you use autoenrollment.

Also, users must use an application that supports content signing. The digital signatures are ready to be used after the certificate is issued and configured in the application. A certificate for digital signatures will mostly be configured automatically in Outlook, so the end user will not need to perform any configuration. If you want to use digital signatures in OWA, you will need to install the latest S/MIME controls. For mobile platforms, things are not so simple: at the moment, most mobile platforms do not support digital signatures in email (although ActiveSync does support it at the protocol level).

However, if you want to send digitally signed content outside of your organization, you can experience CA trust issues. In this scenario, the recipient is not in the same domain as the author, so it does not trust the CA that issued the certificate for the digital signature. Although this kind of digital signature will still be valid from the content protection perspective, the application being used will probably generate a warning on the recipient side.

If you need to send digitally signed content to recipients outside of your organization, I recommend that you buy certificates from a public, globally trusted CA.

Exchange Server 2013 CU2 is released

For all of you who wait for the first SP before deploying a new version of Exchange Server, think again. While Microsoft will still ship Service Packs for Exchange, it has decided to go with more frequent updates, released quarterly, to fix the most significant bugs but also to provide new functionality.

Yesterday, Cumulative Update 2 for Exchange Server 2013 was released. Besides fixing some known (and unknown) bugs, it provides quite a few new and enhanced functionalities. Microsoft has provided new or improved functionality in these areas:

  • Per-server database support
  • OWA Redirection
  • High Availability
  • Managed Availability
  • Cmdlet Help
  • OWA Search Improvements
  • Malware Filter Rules

CU2 can be downloaded here: Exchange Server 2013 CU2. As before, this is a full install rather than just an incremental upgrade, so you can use it for both purposes: a green-field installation or an upgrade of an existing Exchange 2013. Microsoft hasn't yet published release notes for CU2, but a pretty good overview of what's new can be found here.

Some tips for troubleshooting AD CS

Over the last few weeks I have been troubleshooting some PKI deployments based on Windows Server 2008 and 2012, so I decided to share some troubleshooting tips from the field.

In the first case, a customer deployed Windows Server 2008 R2 Standard edition and configured the CA role on it. Since 2008 R2 Standard supports creating and managing certificate templates, there was no need to deploy the Enterprise edition. However, an attempt to install Forefront Identity Manager 2010 R2 CA files failed, because the FIM setup wizard was looking for the Enterprise or Datacenter edition on the CA. We decided to do an online upgrade to the Enterprise edition by using the dism tool, and that went fine. However, from that point the CA role was not able to see any custom certificate templates from AD DS, nor was it able to create new ones, although the server was now officially running Windows Server 2008 R2 Enterprise. The solution was to fix things with the ADSIEdit tool. I ran ADSIEdit, connected to the configuration partition of AD DS, and opened CN=Configuration | CN=Services | CN=Public Key Services | CN=Enrollment Services. Under this key, right-click the problematic CA name and open Properties. Switch to Attributes and look for the flags attribute. For Enterprise CAs this attribute should have the value 10; in my case, the value was 2. After changing it manually to 10 and restarting AD CS, everything was fine.
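
If you prefer to inspect and fix the flags attribute from PowerShell instead of the ADSIEdit GUI, a minimal sketch along these lines should work (the CA and domain names below are placeholders for your own values):

# Bind to the CA's object under Enrollment Services in the configuration partition
$dn = "CN=MyIssuingCA,CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=contoso,DC=com"
$ca = [ADSI]"LDAP://$dn"

# 10 marks an Enterprise CA; anything else would explain the missing templates
$ca.flags

# Fix the value, write it back, and restart AD CS
$ca.Put("flags", 10)
$ca.SetInfo()
Restart-Service certsvc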

In the second case, a customer had a huge number of failed and pending requests on his CA as a result of improperly configured autoenrollment. We are talking about 10,000+ failed or pending requests. I had to clean up this mess, and I used a fairly simple method to do it. If you execute this command:

certutil -deleterow 01/06/2013 Request

all pending and failed requests submitted before June 1st 2013 will be deleted. Be aware, however, that this command can clean up only around 2,500 rows in one pass. If you have more requests to clean, the command will throw an error once it hits that limit. Don't worry about that; just re-run the same command a few times until everything is cleaned up.
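
If you don't want to re-run it by hand, a small PowerShell loop can do it for you. This is a sketch that assumes certutil returns a non-zero exit code when it stops at the row limit:

# Re-run the cleanup until certutil exits successfully (no more rows to delete)
do {
    certutil -deleterow 01/06/2013 Request
} while ($LASTEXITCODE -ne 0)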

Similarly, if you have a large number of expired certificates in the Issued Certificates store on the CA, you can use a similar command to clean them up. Execute:

certutil -deleterow 01/06/2013 Cert

and all certificates that expired up to June 1st 2013 will be deleted.

And if you need to delete one specific request, find the appropriate request ID and execute this:

certutil -deleterow RequestID

After you clean up the mess on the CA, it's a good idea to defragment the CA database. The same kind of utility used for an AD DS database defrag is used here: eseutil (the in-box Windows equivalent is esentutl). Just run eseutil /d followed by the path to the CA database file.
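
As a sketch of the whole operation, assuming the default CertLog location and a hypothetical database name (the CA service must be stopped while its database is defragmented):

# Stop the CA, defragment its ESE database, and start it again
net stop certsvc
esentutl /d "C:\Windows\System32\CertLog\MyIssuingCA.edb"
net start certsvc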

Mail flow issues in Exchange Server 2013

Ever since Exchange Server 2013 was released, some users have been experiencing a pretty annoying mail flow issue, mostly manifested as messages stuck in the Outbox or Drafts folder in Outlook or Outlook Web App.

While this issue has still not been officially confirmed by Microsoft, there are several solutions that can resolve it. In this post, I will present the solutions known so far. Before you start, make sure that you have the latest CU installed on your Exchange Server 2013.

First, you should check whether the mail flow issue is caused by a performance problem on the Exchange server. Sometimes, if the Exchange server is low on system resources, it will stop some services. If you have a performance problem with your Exchange server, it will definitely be recorded in Event Viewer, so make sure that you check there first.

If you are fine on resources and performance but are still experiencing the mail flow issue, try to manually restart the Exchange transport services. If you are running both the CAS and MBX roles on the same machine, you have to restart these three services: Microsoft Exchange Frontend Transport, Microsoft Exchange Mailbox Transport Delivery, and Microsoft Exchange Mailbox Transport Submission. This usually helps if you experience the mail flow issue after you restart your Exchange server.
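
From an elevated PowerShell prompt, that restart could look like this; the short service names below are, to my knowledge, the ones behind the display names listed above:

# Restart the frontend and mailbox transport services in one go
Restart-Service MSExchangeFrontEndTransport, MSExchangeDelivery, MSExchangeSubmission -Force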

DNS configuration on the Exchange server is also a pretty common cause of mail flow issues. To make sure that you have a proper DNS configuration, open the Exchange Admin Center, navigate to servers, select your Exchange server(s), and click Edit on the toolbar. Then navigate to DNS lookups, select your network adapter, and manually enter the DNS server that your Exchange server should use for internal and external lookups. Most likely, it will be your local DNS.
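
The same settings can also be applied from the Exchange Management Shell. This is a sketch with a placeholder server name and DNS address; as far as I know, disabling the adapter-based lookup makes the transport service use the explicit lists:

# Point the transport service at an explicit DNS server for internal and external lookups
Set-TransportService -Identity "EX01" -InternalDNSServers 10.0.0.10 -ExternalDNSServers 10.0.0.10 -InternalDNSAdapterEnabled $false -ExternalDNSAdapterEnabled $false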

If this doesn't help, you can also try to prevent your Exchange transport services from using IPv6 for DNS. For this, you have to edit the Exchange transport configuration files. Navigate to your Exchange installation folder, open the Bin directory, and find the following files:

· EdgeTransport.exe.config

· MSExchangeSubmission.exe.config

· MSExchangeDelivery.exe.config

To each of these files, you should add the following line:

<add key="DnsIpv6Enabled" value="false" />

Be aware, however, that the EdgeTransport.exe.config file already has this entry, but it is set to true, so you should just change it to false.
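
To quickly check which of the config files already carry the entry, a one-liner like this should do (assuming the default installation path):

# List every DnsIpv6Enabled entry across the transport config files
Select-String -Path "C:\Program Files\Microsoft\Exchange Server\V15\Bin\*.exe.config" -Pattern "DnsIpv6Enabled"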

After you do this, it is recommended that you restart the transport services, or the whole Exchange server.

If you have your own solution for this mail flow issue that is not listed here, post a comment.

Moving mailboxes from Exchange 2010 organization to Exchange 2013 organization – Part 2

After you have enabled the Mailbox Replication Proxy service on the source Exchange server, it is a good idea to test its functionality. You can easily do that by executing the Test-MRSHealth cmdlet. Make sure that you have the value True in the Passed column for each test, and you're good to go.
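
For example, run it against the source CAS server and review the Passed value for each check; the server name here is a placeholder:

# Test Mailbox Replication Service health on the source CAS
Test-MRSHealth -Identity "EX2010-CAS01"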

Before actually moving a mailbox, you should prepare it for the move. You actually have to migrate the user object first and prepare a mailbox move request. Luckily, Microsoft has provided a script for this. The steps for preparing an object move are as follows:

1. Open the Exchange Management Shell on the destination CAS server and change the path to "C:\Program Files\Microsoft\Exchange Server\v15\scripts".

2. Type $Local = Get-Credential and press Enter. When prompted, provide admin credentials for the new (destination) organization, that is, the organization to which you are moving the mailbox.

3. Type $Remote = Get-Credential and provide credentials for the source organization. By executing these steps, we are storing the credentials of administrators in both organizations. These credentials are used by the script in the next step.

4. Type:

.\Prepare-MoveRequest.Ps1 -Identity "User UPN" -RemoteForestDomainController FQDN_OF_SourceDC -RemoteForestCredential $Remote -LocalForestDomainController FQDN_OF_LocalDC -LocalForestCredential $Local -TargetMailUserOU "OU=OUName,dc=domain,dc=extension" – replace the placeholder values (identity, domain controllers, and target OU) with your own
(example: .\Prepare-MoveRequest.Ps1 -Identity [email protected] -RemoteForestDomainController dc-srv-01.dizdarevic.ba -RemoteForestCredential $Remote -LocalForestDomainController dc-srv-01.dizdarevic.local -LocalForestCredential $Local -TargetMailUserOU "OU=IT,dc=dizdarevic,dc=local")

5. After you execute this PowerShell script, you should get the reply: 1 mailbox(es) ready to move.

Now, if you open Active Directory Users & Computers in the destination domain, you should find the user object created (and disabled, since we didn't move the password). If the object is there, we are ready to move the mailbox. We will do it in the Exchange Admin Center in Exchange Server 2013.

So, open the EAC and perform the following:

1. Click recipients and then click the migration tab.

2. Click New and choose the Move to this forest option.

3. In the wizard, click Add and select the user that you just moved. You will see only users that were prepared using the procedure described earlier.

4. Enter credentials for the remote/source forest and confirm the name of the migration endpoint – that is the FQDN of the server where you enabled the MRS Proxy service.

5. Choose the destination database for the mailbox, confirm the admin credentials, and start the migration batch.

6. Wait a few minutes until the status of the user object becomes Synced. Then click Complete this migration batch and wait until the status of the object becomes Completed. And you're done!

After the mailbox is migrated, all you have to do is set a password on the moved user account and enable the account. After that, the user can log in to the new forest and will have the mailbox content moved.
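
Those last two steps can also be done from PowerShell with the ActiveDirectory module; a minimal sketch with a placeholder account name:

# Set a new password on the migrated account and enable it
Import-Module ActiveDirectory
Set-ADAccountPassword -Identity "jsmith" -Reset -NewPassword (Read-Host -AsSecureString "New password")
Enable-ADAccount -Identity "jsmith"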

So, as you can see, the process is not simple, but it can be done if you carefully follow the steps I provided. If you have trouble, let me know – maybe I can help.

Moving mailboxes from Exchange 2010 organization to Exchange 2013 organization – Part 1

Moving mailboxes between different Exchange organizations (which means different AD forests) is not an easy thing to do. Luckily, it's not something you do every day, but when you need it, it's better to have the procedure ready and tested. While authoring the 20342 MOC course (Exchange 2013 Advanced Solutions), I was building a lab for moving Exchange mailboxes between different Exchange organizations, and I must admit that I spent quite some time making it work. That's why I decided to pull out some important parts and make them available to all my readers.

First, why would you want to do this? As I said, occasions for this procedure are not frequent, but sometimes you just have to do it. For example, you may be merging two organizations, or you may want a fresh start with a new AD DS and a new Exchange while all your resources are still in the old AD DS. Whatever the reason, you'll have the same starting point: accounts and mailboxes in one organization (let's call it OrgA) and a (fresh) AD DS and Exchange in another organization (let's call it OrgB). By organization, I mean an Exchange organization, not necessarily another company (but definitely another AD DS forest). Let's presume that OrgA has Exchange Server 2010 deployed, while OrgB has Exchange Server 2013, and we want to move mailboxes from OrgA to OrgB.

Since Exchange Server is very deeply integrated with AD DS, you can't just move mailboxes – it's a bit more than that. You also have to move the user objects. This can be done in several ways. The easiest approach is to use ADMT (Active Directory Migration Tool), which can greatly help in moving AD accounts from one domain to another, together with all attributes, as well as with password sync. You can also use Forefront Identity Manager for this purpose. It might be overkill to deploy FIM just for this, but if you already have it in place, you can use it to provision user account objects in the other domain. ADMT would definitely be the best choice; however, at this time we still don't have a version of ADMT that can move user objects from a Windows Server 2008 domain to a Windows Server 2012 domain.

Luckily, there is a script in Exchange Server 2013 that can greatly help with this. The script prepares the AD DS target object and synchronizes the required attributes for cross-forest moves to work. It creates a mail-enabled user account in the target forest if necessary, or synchronizes an existing user when possible. This script is called Prepare-MoveRequest.ps1, and it lives in the Program Files\Microsoft\Exchange Server\V15\Scripts folder. The script is fairly simple to use, but be aware that it does not move passwords, so a user account that you move will be disabled (and without a password) when it lands in OrgB. If you can live with this, you'll be fine. Otherwise, you will have to use ADMT to move the user objects, but then you should be aware that the corresponding script in Exchange Server 2010 is not fully compatible with user objects moved with ADMT, at least with the current version of ADMT. This is because ADMT does not migrate the Exchange attributes on a user object, which can cause the account in the destination domain to look like a legacy Exchange account. The Prepare-MoveRequest script in Exchange Server 2013 supports a new parameter, OverwriteLocalObject, for user objects created by ADMT. With it, the script copies the mandatory Exchange Server attributes from the source forest user to the target user.
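
For an ADMT-created account, the call would look roughly like this (all names are placeholders, and the credential variables are prepared the same way as shown in Part 2 above):

# Overwrite an ADMT-created target object with the source Exchange attributes
.\Prepare-MoveRequest.ps1 -Identity user@orga.com -RemoteForestDomainController dc01.orga.com -RemoteForestCredential $Remote -LocalForestDomainController dc01.orgb.local -LocalForestCredential $Local -OverwriteLocalObject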

OK, let's now see how to do this. First, you have to do some preparation in OrgA and OrgB. In a nutshell, you have to do the following:

· Make sure that the connection between OrgA and OrgB is reliable. Also, make sure that name resolution between the domains in OrgA and OrgB is working.

· Establish a forest trust between OrgA and OrgB. While this is not mandatory, it will definitely make your life easier during this procedure. You can remove the trust later.

· Deploy mutually trusted certificates on the Exchange servers in both organizations. You can achieve this in various ways, and I will not elaborate on it here – just make sure that each Exchange server trusts the certificate on the other one.

After you have done all these preparation steps, you should make sure that the Mailbox Replication Proxy (MRSProxy) service on the Client Access server in the source Exchange organization is running. By default, this service is disabled, and you'll probably have to enable it. The easiest way to do this is to execute the following in the EMS:

Set-WebServicesVirtualDirectory -Identity "EWS (Default Web Site)" -MRSProxyEnabled $true

Before executing this command, it is wise to check the identity of your Web Services virtual directory. The default value is as in the command I just wrote, but it can be different. So, run Get-WebServicesVirtualDirectory | FL and check the Identity attribute.

With the Set-WebServicesVirtualDirectory cmdlet you can also use the MaxMRSConnections parameter. The value of this parameter establishes how many mailbox moves you can run simultaneously. The default value is 100. You should reduce this number if the mailbox moves will go across a slow link; if you have a reliable and fast connection, you can forget about it. If you do change the value, you should restart the Mailbox Replication Service for the change to take effect.

Before you proceed, there is one thing that you should check to spare yourself some headaches later. When you enable the Mailbox Replication Proxy service on the source Client Access servers, the mailbox move endpoint becomes MrsProxy.svc. In some cases, the IIS configuration is missing the svc-Integrated handler mapping, which results in an error such as "(405) method not allowed" when you start moving mailboxes, and that's not something you want to see. To make sure this will work, open a command prompt at C:\Windows\Microsoft.Net\Framework\v3.0\Windows Communication Foundation\ and execute the following command: ServiceModelReg.exe -r. This command reinstalls the handler mappings in IIS. To check the existing handler mappings in IIS, start the IIS console and, with the virtual directory or website selected, double-click Handler Mappings in the center pane. Just make sure that you see *.svc there.

Now that we have things prepared, we will start moving some mailboxes in the next part of this article. Stay tuned.

PST Import tool for Exchange

Microsoft has released an update for the PST import tool, which now works with both Exchange 2013 and Exchange Online. It can be very useful if you want to migrate mail content from users' machines to their mailboxes.

Download here: http://www.microsoft.com/en-us/download/details.aspx?id=36789

Lepide File Server Auditor – file servers under surveillance

After I wrote last month about Lepide Event Log Manager, this time there's another interesting piece of software from the same company, intended for surveillance of file servers.

The ability to monitor changes that occur in the resources that file servers host is very useful, especially when it comes to critical documents and content. The basic auditing that Windows Server provides through group policy and object access auditing can give you basic information, but locating and correctly interpreting that information can often be time consuming and sometimes problematic. Therefore, dedicated software focused on this type of surveillance and monitoring is very useful for many organizations.

Similar to Event Log Manager, File Server Auditor has a simple and intuitive interface and a relatively lightweight configuration. Upon completion of the installation and configuration of this software, which is very simple and has pretty light hardware requirements, you need to add the file servers to be monitored and install the agent on them, using the appropriate credentials. After that, the process of real-time monitoring of changes occurring on the servers begins, according to the adjustments you made in the File Server Auditor console.

[Screenshot: Settings console]

The central element upon which File Server Auditor's monitoring is built is the audit rule. Audit rules are formed from multiple components, so it is advisable, before forming any rules for auditing, to first configure the rule sub-components, except when you want to leave everything at default values (which means monitoring everything all the time, which is perhaps not always the best option). If you prefer a more detailed approach, you can configure the following elements:

Lists

· Events: Here you configure the types of events that you want to follow, for example, files that are opened, read, modified, deleted, or renamed, and changes in SACL and DACL lists. Similar events can be tracked for folders as well. The default event list includes all supported events, which generally results in a pile of logs, so it is wise to narrow this list down a bit.

· Process: You can configure which processes generate changes to file server resources. Again, by default all are selected, but if you are interested in specific ones, the choice can be narrowed accordingly.

· File Name & File Type: As you would expect, it is possible to filter by file type (determined by specifying extensions) or by file name (in which case you can also use wildcards). This lets you audit only the files and folders that match the criteria in the defined filters.

· Directory: If you follow resources contained within particular folders on the file server, here you can determine which folders you want to audit. You can form a list of one or more folders whose contents you want to follow.

· Drive: You can also choose the drive letters on the server for which auditing is carried out. Since this can vary from server to server, and the other options provide ample opportunities for precise filtering, this can be left at the default value, which includes all drives. Alternatively, you can exclude the system drive (usually C) and thus focus logging only on files on other drives.

· Time: The last element (or list, as it is called in the console) is an option to define the time range for auditing. Although it is set by default to monitor continuously, you can change this and define intervals so that auditing is done only at certain times.

From these elements you form the audit policy and, finally, the audit rule, which contains the list of servers being monitored, the identity of the users you want to audit (by default everyone is monitored, but this too can be configured further), and the policy formed earlier.

[Screenshot: Audit rule]

This modular approach to configuration is fairly effective, and once the structure is set up, any of the components can easily be changed. In essence, the configuration components (somewhat awkwardly named lists in the user interface) form an audit policy, which is then assigned in an audit rule to the specified server or servers and the corresponding user or users.

Users are defined through the User Group option. Here you can create groups of users that you want to associate with the appropriate auditing policies. Groups formed here relate only to the application itself and are not visible outside it. It is especially nice that you can take users directly from Active Directory, and in the same place you can associate an audit policy with the new groups, which shortens and eases configuration.

The console settings also allow you to configure alerts, which can be sent via email or SMS when an event defined by a query occurs, and it is possible to back up (and restore, if necessary) the configuration. Given that full configuration of the software can take quite some time, I advise you to be sure to make a backup.

The second part of the management console is designed for reporting on the results of what is configured. This part is based on SQL Server reporting, which has to be defined during the software installation. Reports are pretty clear and easy to read, even though the console itself (similar to the one in Event Log Manager) seems a bit archaic. Interestingly, the application's look can be changed through a variety of layouts (e.g., Windows XP, Office 2007, Visual Studio), which is not particularly useful, but it's cute.

[Screenshot: Reports console]
The predefined reports allow you to display all changes; changes that apply only to reads (successful and unsuccessful); creation of files and folders (also successful and unsuccessful); modifications that occur on any resource; and modifications of permissions on files and folders (SACL and DACL). Each report can be further refined with filters such as time, server, users, files, folders, processes, and specific events. In essence, the filters can use any configurable parameter that we discussed earlier. In addition, it is also possible to create custom reports.

Conclusion

LepideAuditor for File Server is a very useful piece of software. It doesn't take many resources, nor does it have a complicated configuration. There are a few things that should be improved (like the terminology in the console and the graphical interface) but, most importantly, it does the job. More information about this product can be found at the Lepide portal.