Wednesday, September 6, 2017

PowerShell Commands

🌟 Please note: the PowerShell commands shown in this post come from my work on live environments, tried and tested several times over the years. However, copying and using them without any customization might not give the desired results, so take the idea, make the commands your own, and use them well!

#Adding a static route

route add -p 172.xxx.xxx.xxx mask 255.255.255.xxx 172.xxx.xxx.xxx


#Check Routes
route print


#Check BIOS
Get-WMIObject Win32_Bios


#Adding DNS Records
Import-Module DNSShell
Import-CSV c:\DNS\newHostserp.csv | %{
New-DNSRecord -Name $_."HostName" -RecordType A -ZoneName xyz.local -IPAddress $_."IPAddr"
}  


#Bulk adding Display Name against AD users
Import-Csv user.csv | Foreach { Set-ADUser -Identity $_.sAMAccountname -DisplayName $_.DisplayName }


#Display Services & Process of another computer "DC1"
TaskList /S DC1 /svc /fi "imagename eq svchost.exe"


# DHCP: remove an authorized server from AD
Netsh DHCP delete server 2003-dc1.contoso.com 172.xxx.xxx.xxx


#Get a list of Users with last logon time from domain relic.local into a CSV file last_login.csv
Get-ADUser -Filter * -SearchBase "DC=relic,DC=local" -ResultPageSize 0 -Prop CN,lastLogonTimestamp | Select CN,@{n="lastLogonDate";e={[datetime]::FromFileTime($_.lastLogonTimestamp)}} | Export-CSV -NoType last_login.csv


#Get a List of AD Users in domain "relic.local" Exported to CSV file SamAccountNames.CSV at location C:\Temp
Get-ADUser -Filter * -SearchBase "DC=relic,DC=local" -ResultPageSize 0 | ft SamAccountName >>c:\Temp\SamAccountNames.csv


#Get A List of Last Logon Timestamp for Users in a CSV File SamAccountNames.csv belonging to domain relic.local

Get-ADUser -Filter * -SearchBase "DC=relic,DC=local" -ResultPageSize 0 | ft SamAccountName >>c:\abc\SamAccountNames.csv
Import-Module c:\abc\GetADUserLastLogonTime.psm1
Get-OSCLastLogonTime -CsvFilePath "c:\abc\SamAccountNames.csv" >>c:\abc\LogOnDetails.csv


#Get Extended Properties of a User
Get-ADUser -Filter * -SearchBase "dc=relic,dc=local" -Properties DisplayName,TelephoneNumber | Select-Object DisplayName,GivenName,Surname,TelephoneNumber


#Set Extended properties of a user
Set-ADUser -Identity User1 -EmployeeId 1234


#Set Extended properties of users (employee ID only) in bulk from a file
Import-Csv user.csv | Foreach { Set-ADUser -Identity $_.sAMAccountname -EmployeeID $_.EmployeeID }


# Group Policies Applied on a Computer
GpResult /H test.HTML


#Reset WinRM and WinMGMT
Net Start winrm 
Enable-PSRemoting -Force 
net start winmgmt
winmgmt /salvagerepository


#Check Integration Services Version of a VM from Host
Get-VM | ft name, integrationservicesversion


#Check all MAC addresses against unicast and multicast NLB
WLBS
WLBS /?
WLBS Display
WLBS ip2mac 172.xxx.xxx.xxx


#Query the Netlogon service on remote server X
sc \\X query netlogon


#Find and Forcefully Stop a not responding service
Get-Service | Where-Object {$_.Status -eq 'StopPending'} | Format-List * -Force

Get-Service | Where-Object {$_.Status -eq 'StopPending'} | Stop-Service -Force


#Find and Stop a not responding service on a remote server DC
Get-Service -ComputerName "DC" | Where-Object {$_.Status -eq 'StopPending'} | Format-List * -Force

Get-Service -ComputerName "DC" | Where-Object {$_.Status -eq 'StopPending'} | Stop-Service -Force


# Replication Status of Domain Controller named "DC"
repadmin /showrepl

dcdiag /replsource:DC


#Display full data in a column where you see "..." instead of data
$FormatEnumerationLimit = -1


#Kill a task forcefully having PID 4692
TaskKill /F /PID 4692


#Find a service's PID (examples: isactrl, wuauserv)
sc queryex isactrl
sc queryex wuauserv


#Windows Update Commands
wuauclt /detectnow
wuauclt /reportnow
wuauclt /updatenow
wuauclt /resetauthorization /detectnow


#NETSH WinHTTP (Works on CMD with Elevation)
Netsh WinHttp Show Proxy
Netsh WinHttp Reset Proxy


#Script to Reset WSUS Authorization (Make a bat file)
net stop wuauserv
reg Delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate /v PingID /f
reg Delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate /v AccountDomainSid /f
reg Delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate /v SusClientId /f 
reg Delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate /v SusClientIDValidation /f
net start wuauserv
wuauclt.exe /resetauthorization /detectnow
pause


# Move WSUS Updates Directory to a new location at F:  Drive
1- Create Folder WSUS in new location F:\WSUS
2- Go to directory location of file WsusUtil.exe
3- WsusUtil.exe movecontent F:\WSUS\ F:\WSUS\move.log


# Troubleshoot WSUS Error 80004002
Go to RUN and try these one by one
regsvr32 wuapi.dll
regsvr32 wuaueng.dll
regsvr32 wuaueng1.dll
regsvr32 wucltui.dll
regsvr32 wups.dll
regsvr32 wups2.dll
regsvr32 wuweb.dll


# Extract DHCP Reservations List
Get-DHCPServerV4Scope | ForEach {

    Get-DHCPServerv4Lease -ScopeID $_.ScopeID | where {$_.AddressState -like '*Reservation'}

} | Select-Object ScopeId,IPAddress,HostName,ClientID,AddressState | Export-Csv ".\$($env:COMPUTERNAME)-Reservations.csv" -NoTypeInformation

Monday, January 25, 2016

Listing Active Directory Users with Last Log On Time Stamp

In large Active Directory environments it is always a challenge for administrators to track down users who have not logged on for a while, either because they have left the organization or because duplicate accounts were created by mistake.

This affects licensing costs as well as capacity planning.

To get a list of all users with their last logon timestamps, we can combine a few commands with a script that exports the information to a ".CSV" file for our convenience.

Environment:


Domain Name:   relic.org
Temporary Location on a DC:   C:\Scripts
Pre-Built Module Name:   "GetADUserLastLogonTime.psm1"   (available on the TechNet Gallery)

Step 1-


Create a new folder on one of your domain controllers on a suitable location.

I have used following example for this purpose

C:\Scripts 


Step 2- 


Log on to the domain controller and run following command

Get-ADUser -Filter * -SearchBase "DC=relic,DC=org" -ResultPageSize 0 | ft SamAccountName >>c:\Scripts\SamAccountNames.csv

This command will extract a list of user names to the desired destination in ".CSV" format

Step 3- 


Open this file and remove blank rows, blank spaces and any rows with dotted lines (----) from the list, then save the changes.

Here is an example of correct and incorrect file data for next steps

Step 4- 


A pre-built script is used to perform two actions

(A) Read the list of users we created in step 2
(B) Put the last logon time stamps against each user ID

So first we will import this module into domain controller server using this command

Import-Module C:\Scripts\GetADUserLastLogonTime.psm1

Step 5- 


Run the following command to get the list of users, put a last logon timestamp against each, and export the result to a new ".CSV" file, which will be our final output file:

Get-OSCLastLogonTime -CsvFilePath "C:\Scripts\SamAccountNames.csv" >>c:\Scripts\LogOnDetails.csv

The result should look like the screenshot below.

Please note that the encircled SAM accounts are the ones that have never logged on, which is why they all show the same unrealistic timestamp.
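That "unrealistic" timestamp is a direct consequence of how lastLogonTimestamp is stored: it is a Windows FILETIME, a count of 100-nanosecond intervals since January 1, 1601 (UTC), and an account that has never logged on holds 0. A small illustrative sketch of the arithmetic (Python here just for illustration; the PowerShell `[datetime]::FromFileTime` call does the same thing):

```python
from datetime import datetime, timedelta, timezone

# Windows FILETIME counts 100-nanosecond intervals since 1601-01-01 (UTC).
FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert a lastLogonTimestamp (FILETIME) value to a datetime."""
    # One FILETIME tick is 100 ns, i.e. ten ticks per microsecond.
    return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)

# An account that has never logged on stores 0, which maps to the epoch:
print(filetime_to_datetime(0))   # 1601-01-01 00:00:00+00:00
```

So every "January 1601" row in the output simply means "never logged on", not a corrupted attribute.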





Friday, December 18, 2015

Verifying the Integration Services Version on Host and Guest Machines

Hyper-V Integration Services play a supporting role in the administration of virtual environments by providing a number of small but useful features.
For some third-party components, such as Veritas NetBackup, to run smoothly, both the client and the server must be running the same version of the Integration Services.
There are two ways to determine the versions:
Method 1-
On the host, you can find the file "VMMS.EXE" in the following location:
C:\Windows\System32
The file's Properties dialog shows the exact version of the file.


Method 2-
You can also verify the versions from registry
On the Guest: 
HKLM\SOFTWARE\Microsoft\Virtual Machine\Auto\IntegrationServicesVersion

On the Host: 
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestInstaller\Version\Microsoft-Hyper-V-Guest-Installer-Win60-Package

Monday, October 19, 2015

Adding Multiple Resource Records in a Microsoft Based DNS Server

It sometimes happens that you have to add multiple host records to a Windows Server 2012 R2-based DNS server. A PowerShell script can do the magic.
There are three main phases:

      A-   Preparing DNS Server

In order to run DNS-related commands we need the "DNSShell" module on the DNS server. This module is not available by default and has to be downloaded separately.

-          Download and extract the "DNSShell" module to the following location (assuming C: is the home directory on your server):
C:\Windows\System32\WindowsPowerShell\v1.0\Modules

      B-   Preparing the CSV File containing desired entries

-          You need at least two parameters
1-      Host Name
2-      IP Address



The entries should look as shown in the screenshot below.



-          Save this file as .CSV; in this example I have used "newhosts.csv" as the file name.
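Before handing the file to the import script, it can save a failed half-run to sanity-check the CSV first. The following optional sketch (Python, with hypothetical sample data) checks for the two columns the script expects, "HostName" and "IPAddr", and verifies that each address is a valid IPv4 address:

```python
import csv
import io
import ipaddress

def validate_host_csv(text: str) -> list[tuple[str, str]]:
    """Return (HostName, IPAddr) rows, raising ValueError on a bad entry."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        name = row["HostName"].strip()
        addr = row["IPAddr"].strip()
        if not name:
            raise ValueError("empty HostName")
        ipaddress.IPv4Address(addr)  # raises ValueError for an invalid IPv4
        rows.append((name, addr))
    return rows

# Hypothetical file contents, matching the column names used by the script:
sample = "HostName,IPAddr\nweb01,192.168.10.5\napp01,192.168.10.6\n"
print(validate_host_csv(sample))
```

Any typo in a host name or IP then surfaces before the DNS zone is touched, rather than as a half-populated zone.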

 C- Execution of Script


-          Create a new Folder at this location
C:\DNS-Temp

-          Place the csv file in the folder “DNS-Temp”
The following script will be used to perform this job (I have modified the script to fit my example):

Import-Module DNSShell
Import-CSV C:\DNS-Temp\newhosts.csv | %{
New-DNSRecord -Name $_."HostName" -RecordType A -ZoneName relic.org -IPAddress $_."IPAddr"
}  

-          The PowerShell script should run without any error messages.




-          A comparison of Forward lookup zone before and after running the script can be seen in figures below.

Before:




           After:


Friday, August 16, 2013

Rollout Strategy for Data Loss Prevention

Implementing a Data Loss Prevention (DLP) system is important for the security of an organization's sensitive information, but it has always proved tricky when measured against the outcomes sales representatives boast about. A successful DLP implementation does achieve what it claims, but it takes a lot of time and effort from the organization in the initial stages. Since an end-to-end DLP solution usually carries a huge price tag, companies find it hard to justify the amount of time it takes to look effective.

It is good practice to follow these steps as part of a lengthy exercise; I have found this sequence successful in a number of large organizations.


(Note: Some terminologies in this article are taken from Symantec DLP solution components)

A. Enable or Customize Policy Templates 


B. Discover 
    - Identify scan targets 

    - Run scan to find sensitive data on network & endpoint data 


C. Monitor 
    - Inspect data being sent 
    - Monitor network & endpoint events 

D. Protect 
    - Block, remove or encrypt 
    - Quarantine or copy files 
    - Notify employee & manager 

E. Remediate and report on risk reduction
The whole process is time-consuming and requires constant, gradual improvement. I recommend adopting a phase-wise approach to achieving all of these data-protection goals.

Phase I (Policies and Templates) 

 1. Data Classification 

 2. Identification of data classified with applicable Symantec DLP policies

 3. Introduction of Data into DLP System


- Sample data population for Exact Data Matching (EDM) based policies

- Sample data population for Described Content Matching (DCM) based policies

- Sample data population for Indexed Document Matching (IDM) based policies

- Population of positive and negative samples for Vector Machine Learning (VML) based policies

 4. Start monitoring SMTP and HTTP traffic

 5. Identify stakeholders for receiving notifications and for access restrictions
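To make the matching techniques in item 3 concrete: Described Content Matching boils down to pattern- and keyword-based rules run over content. The sketch below is a deliberately simplified, hypothetical DCM-style rule (not Symantec's implementation): it flags 16-digit sequences that pass the Luhn check, the classic heuristic for payment-card numbers.

```python
import re

# Toy DCM-style rule: 16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: doubles every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return the card-like numbers in text that pass the Luhn check."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", m.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("order ref 4111 1111 1111 1111, po 1234567890123456"))
```

EDM and IDM work differently (they fingerprint actual database rows and indexed documents respectively), which is why the sample-data population steps above exist at all: those techniques have nothing to match until real sample data is fed in.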


Phase II (Pilot Audience) 
Notification will be for Administrators only

Network 
1- EDM Policy Implementation on Pilot users 

2- DCM based Policy Implementation on Pilot Users

3- IDM based Policy Implementation on Pilot Users

Databases
1. EDM Policy Implementation on Pilot data repositories 

Endpoints
1. EDM Policy Implementation on Pilot users 

2. DCM based Policy Implementation on Pilot Users

3. IDM based Policy Implementation on Pilot Users

4. Policy review and fine tuning

Phase III (Prevent Mode Enabled for Pilot Users) Notification will be for Administrators and pilot users only 

1. Extension of notification scope to pilot end users

2. Fine tune applied policies based on end user feedback

3. Identification of policies where action mode must be blocking access

4. Blocking access to resources where required. Action includes notification to all stake holders.

Phase IV (Review and Re-adjustments)

1- Review information and tune up policies based on information from end users and network monitors

2- Involve concerned departments to identify their specific requirements

3- Re-align scope of network monitors for monitoring of HTTP and SMTP traffic

4- Addition or removal of protocols to be monitored through network monitor

5- Identify production data repositories to protect

6- Implement policies on production data repositories

Phase V (Monitoring Network and Storage) Notifications will be for Administrators and end users 

1. Intimate and educate end users

2. Policies rollout to QA Department users

3. Policies roll out to non-critical user departments (finance, marketing, corporate communication etc)

4. Policies roll out to critical departments (call centers, administration, sales etc)

5. Feedback collection and fine-tuning

Phase VI (Monitoring Endpoints) Notifications will be for Administrators and end users 

1. Policies rollout to QA Department users

2. Policies roll out to non-critical user departments (finance, marketing, corporate communication etc)

3. Policies roll out to critical departments (call centers, administration, sales etc)

4. Feedback collection and policies fine-tuning

Phase VII (Protecting through Network & Storage) Notifications will be for Administrators and end users 

1. Intimate and educate end users

2. Policies modified to QA Department users

3. Policies modification for non-critical user departments (finance, marketing, corporate communication etc)

4. Policies modification for critical departments (call centers, administration, sales etc)

5. Feedback collection and fine-tuning 


Phase VIII (Protecting on Endpoints) Notifications will be for Administrators and end users 

1. Policies rollout to QA Department users

2. Policies roll out to non-critical user departments (finance, marketing, corporate communication etc)

3. Policies roll out to critical departments (call centers, administration, sales etc)

4. Feedback collection and policies fine-tuning

Friday, May 17, 2013

NTLM Authentication VS DC Interface (A comparison of Symantec Web Gateway Features)


Introduction
Symantec Web Gateway (SWG) is a state-of-the-art proxy and web filtering solution for corporate local area networks. It can authenticate end users and provide them with a secure web browsing experience in line with the organization's policies and requirements.

SWG can use one of the two authentication mechanisms available in it, namely:

-         Domain Controller Interface (DCI)
-         NTLM-based Authentication

SWG can only use one of these methods at a time.

Comparison of NTLM authentication and DC Interface Mechanisms
NTLM and DC Interface provide different kinds of authentication mechanisms and differ in functionality as well.

DC Interface
DCI works by integrating with the domain controllers in an organization. To do so, we need to install a small piece of software on a domain controller; this software integrates SWG with the corporate domain.

How DCI Works
The SWG connects routinely to the DC to obtain all known users' LDAP group information.
1-      User logs on to a computer.
2-      The DC Interface agent on the domain controller detects the logon event and sends the user details and IP address to SWG.
3-      User connects to the Internet.
4-      SWG matches the connecting IP address to a user with the information received from the DC Interface.
5-      SWG obtains LDAP group membership information from the DC.
6-      SWG applies the appropriate policy based on the LDAP information.
7-      If no matching logged-on domain user is identified, SWG applies the next IP-based policy or the default policy.
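The essence of steps 2 and 4 is a lookup table from source IP to logged-on user, kept fresh by the agent's logon events. A minimal, hypothetical sketch (all names, groups and policies invented for illustration; not SWG's actual code):

```python
# IP -> username map, fed by logon events from the DC agent (step 2).
logon_events = {}
# Invented group-to-policy and user-to-group data standing in for LDAP.
group_policy = {"Sales": "allow-social", "IT": "allow-all"}
user_groups = {"alice": "IT", "bob": "Sales"}
DEFAULT_POLICY = "default"

def record_logon(ip: str, user: str) -> None:
    """The DC agent reports 'user logged on from ip' to the gateway."""
    logon_events[ip] = user

def policy_for_request(ip: str) -> str:
    """Resolve a web request's source IP to a policy (steps 4-7)."""
    user = logon_events.get(ip)
    if user is None:
        return DEFAULT_POLICY  # no matching logged-on domain user
    return group_policy.get(user_groups.get(user), DEFAULT_POLICY)

record_logon("10.0.0.5", "alice")
print(policy_for_request("10.0.0.5"))   # allow-all
print(policy_for_request("10.0.0.99"))  # default
```

This also illustrates DCI's weakness noted in the comparison further down: the policy follows the IP, so if another user takes over the machine without generating a new logon event, the previous user's policy still applies.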

NTLM Authentication
NTLM authentication is configured by providing the corporate domain controller's IP and credentials on SWG's NTLM authentication configuration tab. It does not require installing any additional software on a domain controller.

How NTLM Authentication Works
1- The SWG administrator creates an authentication policy set to Ignore, Authenticate (no enforce), or Enforce.
2- The SWG connects routinely to the DC to obtain all known users' LDAP group information.
3- User connects to an Internet site via the proxy.
4- The user's browser receives an NTLM challenge from the Web Gateway.
5- The user's browser responds transparently with a hash of the user's credentials.
6- The Web Gateway connects to the domain controller (noted in the LDAP settings) to verify the credentials.
7- If verification succeeds, policies are applied according to the LDAP information.
8- If the NTLM process is not working correctly, or the user's LDAP information is not yet known, SWG applies the next IP-based policy or the default policy.
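Steps 4-6 form a challenge-response exchange: the password itself never crosses the wire, only a keyed hash of a server-supplied challenge. The sketch below shows that shape only; it is not real NTLM (which is built on MD4/NT hashes, with HMAC-MD5 in NTLMv2), just a simplified stand-in using SHA-256:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Gateway sends a random challenge to the browser (step 4)."""
    return os.urandom(16)

def respond(challenge: bytes, password: str) -> bytes:
    """Browser answers with a keyed hash of the challenge (step 5)."""
    key = hashlib.sha256(password.encode()).digest()  # stand-in for the NT hash
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, password: str) -> bool:
    """Verifier (standing in for the DC, step 6) recomputes the answer."""
    return hmac.compare_digest(response, respond(challenge, password))

ch = make_challenge()
ans = respond(ch, "S3cret!")
print(verify(ch, ans, "S3cret!"))  # True
print(verify(ch, ans, "wrong"))    # False
```

The practical point for SWG is that a fresh challenge is issued per exchange, so a captured response cannot simply be replayed for a different session.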

Comparison of NTLM and DC Interface Features

NTLM has some Advantages over DC Interface


DCI: Provides only a user identification service.
NTLM: Provides both identification and authentication services.

DCI: Integration with the domain controller requires installing agent software on at least one domain controller in the environment.
NTLM: Integration with the domain controller does not require any additional software.

DCI: Policy is mapped on the basis of the IP initially assigned to a machine, which results in a policy mismatch if the user switches machines.
NTLM: Policy is based on the username and only works for the designated user.

Tuesday, February 26, 2013

Mobile Web Browsers: The New Men in the Middle?


It was not very long ago that HTC's Dream came true in the form of the first-ever smartphone based on the Android operating system.

Initially it was a battle to beat iOS, one that later gave us Android as the most popular smartphone OS (thanks to a timely acquisition by Google).

Most popular!  . . . Agreed,

Very Convenient!  . . . Fine,

But can you compromise your security at the cost of convenience?

Definitely Not!

When it comes to accessing the internet through mobile web browsers, one must understand the risks involved in using these browsers to reach websites that contain secure content.
At the moment at least two mobile browsers, Nokia's OVI browser and Opera Mini, use their own proxy servers to decipher secure communication transmitted over the HTTPS protocol.
These browsers are pre-configured to send all traffic to their own proxy servers instead of directly to the actual destination.

The secure content is stripped for examination and changed accordingly. All such companies claim there is no human intervention, access or involvement in the inspection and alteration of content.

On the other hand, the reality is that all our secret information transmitted or received through such browsers is visible to one additional entity, "the browser software provider", and that only if it is mentioned somewhere in a lengthy terms and conditions document. Not a very healthy sign for our privacy.

What Is at Stake?

Personal information, including account passwords and PINs, is the most common example, and potentially the most dangerous too.

Why Do They Need to Strip the HTTPS Traffic?


Mainly there are two reasons, 


  • To make web pages look more suitable on the mobile phone's smaller screen by re-organizing them
  • To offload work from a compact browser by doing the rendering on the application provider's proxy servers



How Do they Do That?

All such browsers are pre-configured to send all traffic to a certain set of proxy servers.
These servers receive the request, forward it to the original website and receive the response. The proxy terminates the HTTPS session, so the secured bits are decrypted there and adjusted to give the user an acceptable browsing experience with limited use of device resources. In a way this does something good for the user, but it comes at the cost of an elevated data-exposure risk.
Now the question is: when the HTTPS traffic is opened up like this, why do users not get any security certificate warning?
Because the browsers are pre-configured to trust the certificates issued for their respective proxy servers, users never see a certificate warning.
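This is exactly the gap that certificate pinning addresses: instead of trusting whatever certificate the browser's built-in store accepts, a client compares the fingerprint of the certificate actually presented against one recorded out of band, so an intercepting proxy's substitute certificate stands out. A hedged sketch of the idea (the byte strings are fake placeholders for real DER-encoded certificates):

```python
import hashlib

# Fingerprint of the genuine certificate, obtained out of band
# (e.g. published by the bank or checked from a trusted network).
PINNED_FINGERPRINT = hashlib.sha256(b"real-bank-certificate-der").hexdigest()

def connection_is_trusted(presented_cert_der: bytes) -> bool:
    """Compare the presented certificate's SHA-256 fingerprint to the pin."""
    return hashlib.sha256(presented_cert_der).hexdigest() == PINNED_FINGERPRINT

print(connection_is_trusted(b"real-bank-certificate-der"))     # True
print(connection_is_trusted(b"proxy-issued-certificate-der"))  # False
```

A browser that silently trusts its vendor's proxy certificate will never fail this check on its own; the comparison has to be done by the client application itself.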

How to Avoid this Issue?

  • If a website's content renders differently on your mobile device than on your laptop, a man in the middle may be involved.
  • Use the full version of a browser instead of the compact version wherever possible.
  • Never use such mobile browsers to access email or online banking portals. Otherwise you add an extra hop which, if compromised, can never be held responsible for any loss, thanks to the million-word privacy policy with the big "I AGREE" button we all press so eagerly during installation.
  • Consider using proxy services