Follow these steps to manually install a Sentinel VoLTE cloud environment on Amazon EC2.
Note each output (shown in the Output columns below) as you go: it forms the input to a subsequent step.
1 Determine the AMI IDs
Determine the AMI IDs for the VoLTE, HSS and Clearwater images:

- `$VOLTEAMIID`
- `$HSSAMIID`
- `$CWAMIID`
2 Create a keypair, VPC, and security groups
Create a VPC
If the EC2 account is an ‘EC2-Classic’ account, it won’t have a default VPC.
If there is no default VPC, follow the steps in this table; otherwise, use the default VPC.
| Step | Action | Command | Output |
|---|---|---|---|
| 1 | Create VPC | `ec2-create-vpc 10.0.0.0/24` | `$VPCID` |
| 2 | Create Internet Gateway | `ec2-create-internet-gateway` | `$GATEWAYID` |
| 3 | Attach Internet Gateway to VPC | `ec2-attach-internet-gateway $GATEWAYID -c $VPCID` | |
| 4 | Create Route Table | `ec2-create-route-table $VPCID` | `$ROUTETABLEID` |
| 5 | Create Route Table entry | `ec2-create-route $ROUTETABLEID -g $GATEWAYID -r 0.0.0.0/0` | |
| 6 | Create Subnet | `ec2-create-subnet -c $VPCID -i 10.0.0.0/24` | `$SUBNETID` |
| 7 | Modify Subnet: auto-assign public IP | `ec2-modify-subnet-attribute $SUBNETID -m true` | |
| 8 | Set DNS hostnames to true | `ec2-modify-vpc-attribute -c $VPCID -d true` | |
| 9 | Associate Subnet to Route Table | `ec2-associate-route-table $ROUTETABLEID -s $SUBNETID` | |
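The nine steps above can be sketched as one shell script. This is a dry-run sketch: the resource IDs below are hypothetical placeholders (in a real run each would be captured from the output of the preceding command, and `run` would execute the commands instead of echoing them).

```shell
#!/bin/sh
# Dry-run sketch of the VPC bootstrap above. The IDs are hypothetical
# placeholders for $VPCID, $GATEWAYID, $ROUTETABLEID and $SUBNETID.
set -e
CIDR=10.0.0.0/24
VPCID=vpc-11111111
GATEWAYID=igw-22222222
ROUTETABLEID=rtb-33333333
SUBNETID=subnet-44444444
CMDS=""
run() { CMDS="$CMDS$*; "; echo "$@"; }   # swap echo for real execution

run ec2-create-vpc $CIDR
run ec2-create-internet-gateway
run ec2-attach-internet-gateway $GATEWAYID -c $VPCID
run ec2-create-route-table $VPCID
run ec2-create-route $ROUTETABLEID -g $GATEWAYID -r 0.0.0.0/0
run ec2-create-subnet -c $VPCID -i $CIDR
run ec2-modify-subnet-attribute $SUBNETID -m true
run ec2-modify-vpc-attribute -c $VPCID -d true
run ec2-associate-route-table $ROUTETABLEID -s $SUBNETID
```

The ordering matters: the gateway must exist before it is attached, and the route table and subnet must exist before the route and the association are created.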
Create two security groups
| Step | Action | Command | Output |
|---|---|---|---|
| 1 | Create Access group | `ec2-create-group "VoLTE cloud access" -d "Access Security group for VoLTE environment" -c $VPCID` | `$SGID-ACCESS` |
| 2 | Create Internal group | `ec2-create-group "VoLTE cloud internal" -d "Internal Security group for VoLTE environment" -c $VPCID` | `$SGID-INTERNAL` |
3 Add rules to security groups
Add these rules to the security groups you created:
| Security group | Rules |
|---|---|
| Access | SIP:<br>`ec2-authorize $SGID-ACCESS -P tcp -p 5052 -o $SGID-INTERNAL`<br>`ec2-authorize $SGID-ACCESS -P tcp -p 5054 -o $SGID-INTERNAL`<br>`ec2-authorize $SGID-ACCESS -P tcp -p 5060`<br>`ec2-authorize $SGID-ACCESS -P udp -p 5060`<br>SSH:<br>`ec2-authorize $SGID-ACCESS -P tcp -p 22`<br>RTP:<br>`ec2-authorize $SGID-ACCESS -P udp -p 16000-18000` |
| Internal | All internal traffic:<br>`ec2-authorize $SGID-INTERNAL -P all -o $SGID-INTERNAL`<br>All traffic from the access group:<br>`ec2-authorize $SGID-INTERNAL -P all -o $SGID-ACCESS`<br>SSH:<br>`ec2-authorize $SGID-INTERNAL -P tcp -p 22`<br>GUI access for REM, HSS and MRF:<br>`ec2-authorize $SGID-INTERNAL -P tcp -p 8080`<br>`ec2-authorize $SGID-INTERNAL -P tcp -p 443`<br>Limit outbound access to only the access and internal security groups:<br>`ec2-authorize $SGID-INTERNAL -P all -o $SGID-INTERNAL --egress`<br>`ec2-authorize $SGID-INTERNAL -P all -o $SGID-ACCESS --egress`<br>`ec2-revoke $SGID-INTERNAL -P all -s 0.0.0.0/0 --egress` |
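The access-group rules above can be applied with a couple of loops. This is a dry-run sketch: the security-group IDs are hypothetical placeholders for `$SGID-ACCESS` and `$SGID-INTERNAL`, and `run` echoes instead of executing.

```shell
#!/bin/sh
# Dry-run sketch of the access-group rules. SG IDs are hypothetical.
set -e
SGID_ACCESS=sg-aaaa1111
SGID_INTERNAL=sg-bbbb2222
COUNT=0
run() { COUNT=$((COUNT+1)); echo "$@"; }   # swap echo for real execution

# SIP: 5052/5054 only from the internal group, 5060 from anywhere
for port in 5052 5054; do
  run ec2-authorize $SGID_ACCESS -P tcp -p $port -o $SGID_INTERNAL
done
for proto in tcp udp; do
  run ec2-authorize $SGID_ACCESS -P $proto -p 5060
done
run ec2-authorize $SGID_ACCESS -P tcp -p 22            # SSH
run ec2-authorize $SGID_ACCESS -P udp -p 16000-18000   # RTP
```

Note the distinction the rules draw: ports 5052 and 5054 accept traffic only from the internal security group, while 5060, SSH and the RTP range are open more widely.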
4 Create the instances from the AMIs
| Step | Command | Output |
|---|---|---|
| 1 | `ec2-run-instances $VOLTEAMIID -t m3.large -k $KEYPAIR -s $SUBNETID --associate-public-ip-address true -g $SGID-INTERNAL` | `$VOLTE_INSTANCE_ID` |
| 2 | `ec2-run-instances $HSSAMIID -t m1.medium -k $KEYPAIR -s $SUBNETID --associate-public-ip-address true -g $SGID-INTERNAL` | `$HSS_INSTANCE_ID` |
| 3 | `ec2-run-instances $CWAMIID -t m3.large -k $KEYPAIR -s $SUBNETID --associate-public-ip-address true -g $SGID-INTERNAL` | `$CW_INSTANCE_ID` |
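Since the three launches differ only in AMI and instance type, they can be driven from a small list. This is a dry-run sketch: the AMI IDs, keypair, subnet and security-group IDs are hypothetical placeholders for the values determined in the earlier steps.

```shell
#!/bin/sh
# Dry-run sketch of the three ec2-run-instances calls above.
# All IDs are hypothetical placeholders.
set -e
KEYPAIR=volte-keypair
SUBNETID=subnet-44444444
SGID_INTERNAL=sg-bbbb2222
COUNT=0
run() { COUNT=$((COUNT+1)); echo "$@"; }   # swap echo for real execution

while read ami type; do
  run ec2-run-instances $ami -t $type -k $KEYPAIR -s $SUBNETID \
    --associate-public-ip-address true -g $SGID_INTERNAL
done <<EOF
ami-volte111 m3.large
ami-hss22222 m1.medium
ami-cw333333 m3.large
EOF
```

Only the HSS uses the smaller m1.medium type; the VoLTE and Clearwater instances both use m3.large.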
5 Note IP addresses and DNS names
Note the following:

- the internal IP address of each of the servers
- the internal DNS name of each of the servers
- the external IP address of each of the servers
- the external DNS name of each of the servers
6 Add entries to each instance in /etc/hosts
ssh into each instance and add these entries to /etc/hosts:

| Instance | Entry |
|---|---|
| volte-instance | `$VOLTE-INTERNAL-IP` |
| hss-instance | `$HSS-INTERNAL-IP` |
| clearwater-instance | `$CW-INTERNAL-IP` |
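A resulting /etc/hosts fragment might look like the sketch below, assuming the standard hosts-file layout of internal IP followed by hostname. The IP addresses are hypothetical; substitute the internal addresses noted in step 5. (The sketch writes to a scratch file rather than the real /etc/hosts.)

```shell
#!/bin/sh
# Sketch of the /etc/hosts additions. IPs are hypothetical placeholders
# for the internal addresses recorded in step 5.
set -e
HOSTS=$(mktemp)
cat >> "$HOSTS" <<EOF
10.0.0.10 volte-instance
10.0.0.11 hss-instance
10.0.0.12 clearwater-instance
EOF
cat "$HOSTS"
```

On the real instances, the same `cat >> /etc/hosts <<EOF … EOF` pattern (run with sudo) appends the entries in place.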
7 Update configuration
Update these configurations:

| Configuration | What to update | Command |
|---|---|---|
| HSS Application Server | Rhino address | `rhinosipaddr="sip:$VOLTE-INTERNAL-DNS:5060;transport=tcp"`<br>Via the HSS Admin GUI (Services / Application Servers page), or:<br>`ssh ec2 hss`<br>`mysql -u root -ppassword -D hss_db -e "update application_server set server_name = '$rhinosipaddr' where name = 'Rhino_AS';"` |
| HSS S-CSCF | Clearwater address | `cwsipaddr="sip:$CW-INTERNAL-IP:5054"`<br>Via the HSS Admin GUI (Network Configuration / Preferred S-CSCF Sets page), or:<br>`ssh ec2 hss`<br>`mysql -u root -ppassword -D hss_db -e "update preferred_scscf_set set scscf_name = '$cwsipaddr' where name = 'sprout';"` |
| clearwater | HSS and upstream details | `ssh ec2 cw`<br>`vi /etc/clearwater/config`<br>`upstream_hostname=$CW-INTERNAL-IP`<br>`hss_hostname=$HSS-INTERNAL-DNS`<br>`hss_port=3868` |
| /etc/clearwater/s-cscf.json | Clearwater hostname | `ssh ec2 cw`<br>`vi /etc/clearwater/s-cscf.json`<br>`{ "s-cscfs" : [ { "server" : "sip:$CW-EXTERNAL-DNS:5054;transport=TCP", "priority" : 0, "weight" : 100, "capabilities" : [] } ] }` |
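The first two rows above both follow the same pattern: build a SIP URI from an internal address, then write it into hss_db. The sketch below constructs the two URIs and prints the resulting SQL; the hostname and IP are hypothetical placeholders for `$VOLTE-INTERNAL-DNS` and `$CW-INTERNAL-IP`, and on the real HSS the statements would be passed to mysql rather than echoed.

```shell
#!/bin/sh
# Sketch: build the two SIP URIs and print the SQL from the table above.
# Hostname and IP are hypothetical placeholders.
set -e
VOLTE_INTERNAL_DNS=ip-10-0-0-10.ec2.internal
CW_INTERNAL_IP=10.0.0.12
rhinosipaddr="sip:$VOLTE_INTERNAL_DNS:5060;transport=tcp"
cwsipaddr="sip:$CW_INTERNAL_IP:5054"
echo "update application_server set server_name = '$rhinosipaddr' where name = 'Rhino_AS';"
echo "update preferred_scscf_set set scscf_name = '$cwsipaddr' where name = 'sprout';"
```

Note the port difference: the Rhino application server is addressed on 5060 over TCP, while the preferred S-CSCF entry points at Sprout on 5054.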
8 Effect changes
To effect the configuration changes:

| Step | Action | Command |
|---|---|---|
| 1 | Install Ralf and point the Clearwater config at Ralf | `ssh ec2 cw`<br>`vi /etc/clearwater/config`<br>`#ralf_hostname=` |
| 2 | Restart Clearwater to pick up the changes | `ssh ec2 cw`<br>`sudo service clearwater-infrastructure restart`<br>`sudo monit stop bono`<br>`sudo monit stop ellis`<br>`sudo monit restart sprout`<br>`sudo monit restart homestead` |
| 3 | Update details in the HSS configuration | `ssh ec2 hss`<br>`vi /home/ubuntu/HSS/FHoSS/deploy/DiameterPeerHSS.xml`<br>change:<br>`FQDN="$HSS-INTERNAL-DNS"`<br>`Realm="example.com"`<br>`<Acceptor port="3868" bind="$HSS-INTERNAL-IP" />`<br>`vi /home/ubuntu/HSS/FHoSS/deploy/hss.properties`<br>change:<br>`host=$HSS-INTERNAL-IP`<br>`port=8080` |
| 4 | Restart the HSS to pick up the changes | `ssh ec2 hss`<br>`sudo /etc/init.d/hss stop`<br>`sudo /etc/init.d/hss start` |
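The restart ordering in steps 2 and 4 can be sketched as one dry-run script: bono and ellis are stopped and left stopped, sprout and homestead are restarted, and then the HSS is bounced.

```shell
#!/bin/sh
# Dry-run sketch of the restart sequence from steps 2 and 4 above.
set -e
COUNT=0
run() { COUNT=$((COUNT+1)); echo "$@"; }   # swap echo for real execution

run sudo service clearwater-infrastructure restart
for svc in bono ellis; do run sudo monit stop $svc; done
for svc in sprout homestead; do run sudo monit restart $svc; done
run sudo /etc/init.d/hss stop
run sudo /etc/init.d/hss start
```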